
Does UCM support FTP uploading of files? Urgent, please.

Discussion in 'Oracle Webcenter Suite (formerly Oracle ECM)' started by sivavp1, Dec 19, 2010.

  1. sivavp1

    sivavp1 Forum Advisor

    Hi

    Does UCM support FTP uploading of files?
    We are looking at migrating a Documentum system into the existing UCM.

    Thanks
    Siva
     
  2. Sadik

    Sadik Community Moderator Forum Guru

    FTP is a server-side application, so FTP-ing your files to the UCM server has little to do with the UCM application itself. What you are really looking for is a mass migration of content from Documentum to UCM. I am not aware of an out-of-the-box converter, but there is the Batch Loader utility. For that you first have to generate the batch script file; read the documentation for details on how to use the Batch Loader.
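    For anyone new to the Batch Loader: its input is a plain-text record file of name/value pairs, one record per document, each record ended by a `<<EOD>>` line. A minimal sketch in Python that generates one from a staging directory (the metadata defaults here are illustrative, not required values; check your own required fields):

```python
import os

def make_batch_record(path, doc_type="Document", security_group="Public",
                      author="sysadmin"):
    """Build one Batch Loader record for a single file."""
    name = os.path.splitext(os.path.basename(path))[0]
    fields = {
        "Action": "insert",
        "dDocName": name.upper(),
        "dDocType": doc_type,
        "dDocTitle": name,
        "dDocAuthor": author,
        "dSecurityGroup": security_group,
        "primaryFile": path,
    }
    return "\n".join(f"{k}={v}" for k, v in fields.items()) + "\n<<EOD>>\n"

def write_batch_file(src_dir, out_path):
    """Scan a staging directory and write one record per file.

    Write out_path somewhere outside src_dir so the batch file
    doesn't pick itself up.
    """
    with open(out_path, "w") as out:
        for fname in sorted(os.listdir(src_dir)):
            full = os.path.join(src_dir, fname)
            if os.path.isfile(full):
                out.write(make_batch_record(full))
```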
     
  3. dcell59

    dcell59 Forum Advisor

    You can also use Desktop Integration Suite (DIS). In 11g, you can require metadata to be entered for new check-ins to a folder, which helps you set up the common metadata for all of the files.

    As for the original question, I don't know of anything in UCM that provides FTP upload capability. You could write a component that monitors a directory and automatically checks in the files to your server.
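    Such a component boils down to a hot-folder polling loop. A hypothetical standalone sketch (not UCM component code; `check_in` is a stub standing in for whatever check-in call you actually use):

```python
import os
import time

def poll_once(watch_dir, seen, check_in):
    """Scan the watch directory once; check in any file not seen before."""
    for fname in sorted(os.listdir(watch_dir)):
        full = os.path.join(watch_dir, fname)
        if os.path.isfile(full) and full not in seen:
            check_in(full)   # hand off to the real check-in call
            seen.add(full)

def watch(watch_dir, check_in, interval=5.0):
    """Poll forever, handing new files to check_in as they appear."""
    seen = set()
    while True:
        poll_once(watch_dir, seen, check_in)
        time.sleep(interval)
```

    A production version would also wait for files to stop growing before checking them in, and move or delete them afterwards so the `seen` set doesn't grow forever.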
     
  4. jason_m_Longoria

    jason_m_Longoria Active Member

    Both of the suggestions are correct. Even with an FTP function, all you are doing is moving documents to a staging area so you can build your batch file. The quickest way to get the documents into the system would be through a programmatic check-in process, which is not very difficult, and there is a ton of supporting documentation. The other option would be to use a federation tool that lets you assume ownership of the data files; this is new in 10g/11g and is considered a "manage in place" methodology. If you have any other questions, please feel free to ask.

    Jason M Longoria
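    A programmatic check-in ultimately means calling the CHECKIN_UNIVERSAL service with a set of metadata fields plus the file itself. How you post it (RIDC, SOAP, or the CGI URL) depends on your environment, so here is just a minimal sketch of assembling the parameter set; the field names are standard UCM metadata, and the defaults are illustrative assumptions:

```python
import os

def build_checkin_params(path, title, doc_type="Document",
                         security_group="Public", author="sysadmin"):
    """Assemble the name/value pairs for a CHECKIN_UNIVERSAL request.

    The file content itself is sent as the upload body alongside
    these parameters; primaryFile carries the file name.
    """
    return {
        "IdcService": "CHECKIN_UNIVERSAL",
        "dDocTitle": title,
        "dDocType": doc_type,
        "dDocAuthor": author,
        "dSecurityGroup": security_group,
        "dDocAccount": "",
        "primaryFile": os.path.basename(path),
    }
```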
     
  5. jason_m_Longoria

    jason_m_Longoria Active Member

    Be wary of using DIS, as it has known issues when moving too many documents at one time. Also, you would have to have the metadata set by a doc type or some profile, or you would be entering it for every check-in if it varies. DIS is really meant for the contribution folders. The worst part is that you would have to use a manual process for validation. So, depending on your requirements, you have a few things to look into.
     
  6. dcell59

    dcell59 Forum Advisor

    I'm pretty sure that 11g fixes the problems with copying lots of documents, but you are right that this is a problem in older versions. The new 11g metadata prompts feature helps with entering metadata for a large group of documents, but if you have to generate different metadata for each document (other than obvious things like the content ID and title), you'd need to create a custom uploader or metadata updater.

    Could you tell me a little more about what you mean by "The worst part is you would have to use a manual process for validation"? Are you talking about verifying that what you copied to the server actually made it, or something else?
     
  7. jason_m_Longoria

    jason_m_Longoria Active Member

    You are correct, I'm talking about validating that the documents actually made it into the system, and that you can find them as well. I would agree with the first option: programmatic check-in with a custom validation loop in your code. By custom I mean specific to the actual move of the docs; this piece of code would be modified as you moved across repositories. Basically, I have seen a few different ways to do this. If you would like more info, let me know and I'll post a couple of different ways to validate.
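    The simplest validation loop along these lines: keep the list of IDs (or titles) you pushed, query the target system for what it actually holds, and diff the two sets. A hypothetical sketch where `fetch_checked_in_ids` stands in for a real search-service call:

```python
def find_missing(expected_ids, fetch_checked_in_ids):
    """Return, sorted, the IDs that were sent but never made it in.

    fetch_checked_in_ids is any callable returning the IDs the
    target system actually contains (e.g. a search-service query).
    """
    present = set(fetch_checked_in_ids())
    return sorted(set(expected_ids) - present)
```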
     
  8. dcell59

    dcell59 Forum Advisor

    I would be interested in the validation methods.

    BTW, this may or may not be relevant, but I am one of the DIS developers. I mostly concentrate on the Office add-ins, but I do have some experience in WEI.
     
  9. jason_m_Longoria

    jason_m_Longoria Active Member

    That's really good to know, as I am going to start another thread on the V-Lookup using the DIS client; there are so many issues with the way it's been implemented, and they are not taking no for an answer. Please respond! I'll post some of the validations that I have used this afternoon; they are not super complex, but they are very thorough. I'll post when I get out of my meeting.


    Thanks
    Jason M Longoria
     
  10. dcell59

    dcell59 Forum Advisor

    Last spring when we found the problem with copying large trees of data with DIS Windows Explorer Integration (WEI), it was a real problem to validate all of the files. The reason for this is the DesktopTag component, which modifies some documents. For example, when a Word document is downloaded, DesktopTag adds a set of custom properties to the file, which are used to help with some operations (in particular, it allows you to move a document to another system and still be able to check in revisions). When the file is checked in, these properties are removed. To make things more interesting, the CleanContent component, which is used by DesktopTag, uses a better compression than Word does with docx files. I've seen situations where a Word document comes down from the server and is half the size of the same document after editing it with Word and making no changes other than re-saving it.

    Anyway, as a result of this, we can't simply do a checkin/get/compare operation to verify that the data made it. I do think it would be useful to at least have a way for CHECKIN_UNIVERSAL to validate that the file that got to the server is what was sent. Even just a checksum/hash argument that could be checked might be useful.
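    On the client side, the checksum half of that is easy to script: hash each source file before upload, then hash the copy you retrieve and compare. But as the post explains, DesktopTag/CleanContent can rewrite documents on the way through, so a byte-level hash will report false mismatches for those formats; this sketch is only valid for content the server stores verbatim:

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Stream a file through SHA-256 so large documents aren't loaded whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_round_trip(source_path, retrieved_path):
    """True if the retrieved copy is byte-identical to what was sent."""
    return sha256_of(source_path) == sha256_of(retrieved_path)
```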
     
  11. sivavp1

    sivavp1 Forum Advisor

    Thanks all for your info.