
Ninja Tricks for Migrating NX CAD Data


One thing I’ve had the opportunity to do several times is migrate NX CAD data into Teamcenter. Every time I’ve learned something and figured out a new trick or three. I think I’ve about got it down now, so here’s a summary of my favorite “ninja tricks” for migrating NX files. I won’t go as deep into some of the technical how-tos as I usually do, because if I did I’d have to write a book. Besides, some of the topics have already been covered elsewhere here on the Dojo.

I won’t presume that the business requirements I’m working with are appropriate for your specific needs. However, if you’re preparing to do a large NX migration, I think you’ll find something useful in the following ideas.

Go large, or go home

Ninjas ain’t interested in onesy-twosey migrations that can be done using the import dialog or the utilities. They’re interested in the big migrations that are too large to do piecemeal. Ninja migration tricks will let you migrate the data quickly and consistently, but they take time to set up. If you have 100 files to migrate, just do them “by hand”. If you have a million, go ninja.

Don’t just use autotranslate, use the hell out of autotranslate

Autotranslate is a favorite customization of mine to write. Sure, it appears simple on the face of it: take abc123_01.prt and turn it into ABC123/01. But, oh, you can make it so much more sophisticated than that. Here are some of the things I do with it.

Fix item IDs consistently

Want to fix those bad item IDs? Have you already renamed parts in Teamcenter and need to make sure references are updated in the other files you migrate? Make a mapping file: two columns, “from” and “to”. Read it in the first time you call autotranslate (and cache it so you’re not re-reading it every time). If the filename you’re processing is in the mapping file’s “from” column, translate to the ID in the “to” column; otherwise, use the default logic.
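Here’s a rough sketch of that mapping-plus-fallback logic in Python. (The real autotranslate callback is written in C against the NX Open API; the file names, the CSV layout, and the default naming rule below are all just assumptions for illustration.)

```python
import os

_mapping_cache = None  # cached so the mapping file is read only once

def load_mapping(path):
    """Read a two-column 'from,to' mapping file into a dict."""
    mapping = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            src, dst = [col.strip() for col in line.split(",", 1)]
            mapping[src.lower()] = dst
    return mapping

def default_translation(stem):
    # Default rule from the article: abc123_01.prt -> ABC123/01
    if "_" in stem:
        item, rev = stem.rsplit("_", 1)
        return f"{item.upper()}/{rev.upper()}"
    return stem.upper()

def autotranslate(filename, mapping_path="id_mapping.csv"):
    """Translate a native filename into a Teamcenter item ID."""
    global _mapping_cache
    if _mapping_cache is None:          # read once, then cache
        _mapping_cache = load_mapping(mapping_path)
    stem = os.path.splitext(os.path.basename(filename))[0]
    # An explicit mapping entry wins; otherwise fall back to the default rule.
    if stem.lower() in _mapping_cache:
        return _mapping_cache[stem.lower()]
    return default_translation(stem)
```

The caching is the important part: with a million files you really don’t want to re-open the mapping file on every call.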

Dodge collisions

Site A has a foobar.prt. Site B has a different foobar.prt. First you migrate site A. Then you migrate site B. You have to make sure that foobar.prt assemblies at site B don’t end up looking at site A’s foobar.prt after the migration.

My solution: Before migrating site B, dump a report out of Teamcenter listing every item ID. You can use ITK or SQL or whatever works. Then autotranslate does its normal translation, but checks that report before returning a result. When it finds FOOBAR listed there, it changes FOOBAR into SITE_B.FOOBAR and the collision is avoided.
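The collision check can be sketched the same way. (Again, this only illustrates the logic; the report format and the SITE_B prefix are assumptions, and a real implementation would live inside the C autotranslate callback.)

```python
_existing_ids = None  # cached set of IDs already in Teamcenter

def load_existing_ids(path):
    """Load the item-ID report dumped from Teamcenter into a set."""
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

def dodge_collision(item_id, listing_path="tc_item_ids.txt", prefix="SITE_B."):
    """Prefix the ID when it already exists in Teamcenter from the other site."""
    global _existing_ids
    if _existing_ids is None:           # read the report once, then cache
        _existing_ids = load_existing_ids(listing_path)
    if item_id.upper() in _existing_ids:
        return prefix + item_id.upper()
    return item_id.upper()
```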

Lie about revision ID

I have a component at revisions A, B and C. I only care about the latest revision, so I migrate revision C. Later I migrate an assembly that uses revision A. But I want the migrated assembly to use the rev C I already migrated. I don’t want to migrate rev A out of order so that it’s newer than C. In fact I don’t want it at all. So I look in that listing file I used to dodge collisions. You see, what I didn’t tell you already is that it also lists the latest revision ID of each file. So, I do a simple comparison of the revision I’m migrating and the current highest rev in TC. If the native file has a higher rev, I go ahead and use that. But if TC’s rev is higher, I use that one instead. Oh yeah, and I use the “use existing” option.

Let me summarize what happens.

  • I’m about to migrate a file at rev A.
  • But, I look in my listing file and see that I already have a rev C in Teamcenter.
  • So, I tell TC to refer to the file as rev C, instead of its actual rev, A.
  • Since I’m using the “use existing” option, the file isn’t actually moved into TC. Instead, the assemblies that use the file at rev A are repointed to the Item Revision at rev C.
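The comparison itself is trivial. Here’s a sketch, assuming single-letter revision IDs that compare alphabetically (multi-character or numeric rev schemes would need a smarter comparison):

```python
def pick_revision(native_rev, tc_latest_rev):
    """Return the revision ID the clone should target.

    Single-letter revs compare alphabetically (A < B < C). If Teamcenter
    already holds a same-or-later rev, "lie" and use it so the assembly
    repoints to the existing Item Revision (via the "use existing" option).
    """
    if tc_latest_rev is None:
        return native_rev       # nothing in TC yet; migrate as-is
    if native_rev > tc_latest_rev:
        return native_rev       # the native file really is newer
    return tc_latest_rev        # TC already has a same-or-newer rev
```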

Use convert callbacks

Convert callbacks are the second type of custom callback I like to implement to ninjafy (is that a word?) my migrations. In short, if you don’t explicitly tell the cloning operation which values to use for certain fields, it will check whether you registered a convert callback for that particular field, and call it. Here’s a list of the types of convert callbacks you can register:

  • Item Type
  • Owning Group
  • Owning User
  • Name
  • Description
  • Associated files directory
  • Checkout comment

Personally, I’ve used them to set Item type.
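A convert callback for item type boils down to one decision function. Here’s a sketch of the kind of rule you might plug in; the naming convention and the type names “Drawing” and “Item” are purely hypothetical (the real callback is registered in C, and real sites key off their own conventions or file attributes):

```python
def convert_item_type(filename):
    """Decide the Teamcenter item type for a file being cloned.

    Hypothetical rule: files following a drawing naming convention get
    a drawing type; everything else gets the plain item type.
    """
    name = filename.lower()
    if name.endswith("_dwg.prt") or "_drawing" in name:
        return "Drawing"   # hypothetical type name
    return "Item"          # hypothetical default type
```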

Implement autotranslate & convert, review, repeat

Okay, now that you’ve got your autotranslations and your converts implemented, get a listing of every single file and run your autotranslate and convert functions against Every. Single. One. and dump the results out to a CSV file. I just write a simple wrapper program that processes a list of filenames and calls my functions for every single one.


Seriously, lay off the extra shot of espresso in your coffee, ’kay? It really isn’t as bad as you think. No, really. If it takes more than a minute to process a million file names you probably have a flaw in your code. (I’d check that you’re not re-reading the mapping and listing files for every single translation. This is a good time to use static variables.)

The thing is, when you get that csv file and open it up, you’ll find all the oddball cases you didn’t realize you had to deal with. Or maybe you never got the file to look at because your autotranslate blew up somewhere on a filename it totally didn’t expect. “What the hell? Someone named a file 1.prt? What were they thinking!!!!” It’s a lot better to blow up now than later when you’re actually migrating data.
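A wrapper like that can be as simple as this sketch: run the translate function against every name, catch anything that blows up, and write it all to a CSV for review. (The function names are assumptions; nothing here touches Teamcenter, which is the whole point of the dry run.)

```python
import csv

def dry_run(filenames, translate, out_path="dry_run.csv"):
    """Run `translate` against every filename, dumping the result
    (or the error) to a CSV for review. Returns the error count."""
    errors = 0
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "item_id", "error"])
        for name in filenames:
            try:
                writer.writerow([name, translate(name), ""])
            except Exception as exc:
                # Oddball names blow up here, not mid-migration.
                errors += 1
                writer.writerow([name, "", repr(exc)])
    return errors
```

Opening the resulting CSV in a spreadsheet is where you’ll spot the “1.prt” surprises before they cost you anything.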

Use notify callbacks

Notification callbacks are a cool way to customize the behavior of cloning operations, which, if you didn’t know, are how NX imports files into Teamcenter. Basically, there are dozens of types of events which occur during a cloning operation, and you can register custom functions to run against any of them. Or all of them. Look up UF_CLONE_notify_callback_t in the NX docs to get the full list. Here are a few of my favorites.

Post-actions against UF_CLONE_end_part_clone_cb

The UF_CLONE_end_part_clone_cb message is called immediately after migrating a part file. I like to register a bunch of post-actions against this message to finish off the migration properly.

  • Log migrations. First, I write to a migration log file what was migrated and when.
  • Move files. Second, I move the files out of the migration directory completely (more on why I do this later).
  • Apply status. Third, I apply the appropriate status to the new item revision.

Cleanup operations against UF_CLONE_terminate_cb

UF_CLONE_terminate_cb is the last thing called during a clone operation. It’s a good time to do any cleanup or resetting of parameters you need to get done before the next migration.

Generate your own syslog!

This may not seem like a big deal at first, and it was a pain in the butt to get working right, but believe me, I’m glad I took the time to do it. I created a logging function that I registered for every single message. The message emitted to the log file tells me what message is being processed and any data associated with the particular message. This does two things for me:

  1. It teaches me a lot about the inner workings of the cloning operations. I get to see what it does at a very fine level of detail, and that knowledge has been very helpful to have.
  2. When something crashes the migrations (and eventually, something will), I have a very good record of exactly what was going on at the time of the crash. It’s not always immediately clear why it crashed, but having the log files helps zero in on the problem quickly.
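The catch-all logger itself is tiny once you have it wired up. Here’s a sketch of the idea: one function, registered for every message, that records the message name and its data with a timestamp. (In reality this is a C callback and the data argument varies per message; the Python below just shows the shape of the logging.)

```python
import time

def make_notify_logger(log_path):
    """Return a single callback suitable for every clone message;
    it records the message name and any associated data."""
    def log_message(message_name, data=None):
        with open(log_path, "a") as log:
            stamp = time.strftime("%H:%M:%S")
            log.write(f"{stamp} {message_name} {data!r}\n")
    return log_message
```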

Move the files to a local disk (and filter out the junk)

Don’t try to migrate off the network. It’s too slow and flaky. Copy the files to a machine you control. (I bet you’ll be surprised at how little space a bajillion files really take up.) And get rid of extra revisions and duplicate files while you’re at it. Get the pre-processing done up front; don’t try to pick and choose which revision you’ll migrate while the migration is in process. Just don’t.

Flatten the folder hierarchy as much as possible (but no more)

Once you have the files local, flatten the directory structure as much as possible. If you can put all of the files in one directory, do it. This exercise will flush out all the duplicate files. The only time I’d keep more than one directory is if the directory structure tells you something about how to migrate the files. For example, if you have a directory for “Released” and another for “In Work” you may apply different statuses in Teamcenter.
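Flattening is also a cheap way to surface the duplicates automatically. A minimal sketch, assuming duplicates are simply files sharing the same name (a real pre-processing pass might also compare checksums or modification dates to pick which copy to keep):

```python
import os
import shutil

def flatten(src_root, dest_dir):
    """Copy every .prt file under src_root into one flat directory.
    Files whose names collide are reported, not copied, so a human
    can decide which copy wins."""
    os.makedirs(dest_dir, exist_ok=True)
    duplicates = []
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            if not name.lower().endswith(".prt"):
                continue  # skip non-NX junk
            target = os.path.join(dest_dir, name.lower())
            if os.path.exists(target):
                duplicates.append(os.path.join(dirpath, name))
            else:
                shutil.copy2(os.path.join(dirpath, name), target)
    return duplicates
```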

Remove “lost” family members from the migration directory

When you migrate part family members you have the option to “treat them as lost” or to convert them to “normal” parts. The idea is that part families will likely be migrated by migrating the template file at some point and then generating the family members, and this could happen before or after any given migration. So “treat as lost” pretends that the family member files aren’t actually around to migrate and just updates the references to their TC equivalents, with the expectation that sooner or later the actual TC family members will exist. Here’s the thing, though: if a family member is “lost” but is actually present in the migration directory, the clone operation will ignore anything autotranslate has to say and look in the template file for an item ID to map to. On top of that, the clone operation will look inside the family member and process any references it finds inside of it. At best, that takes time. At worst, it finds something in there that kills the migration process. (I’ve had it happen.)

Migrate “Bottom up” (and hide the bottom)

The fewer files you migrate, the less chance there is for something to go wrong. Can we agree that that sounds reasonable? And how about this: once we’ve done something, we shouldn’t keep re-doing it over and over.

Okay, then. Let’s back up. Here’s how cloning operations work. You ask NX to import an assembly. It looks inside the assembly file for any references to other files. Then it looks to see if it can find those files. If it finds them, it looks inside of them for references to more files, and so on. All of this happens regardless of whether any of the component files have been migrated already. It then goes through all of the filenames it found and starts assigning item IDs and item types, etc. But if it can’t find the referenced files, that’s not an error. It just treats the files as “lost”, assigns an item ID to the reference, and goes on its way. So, if you import a top-level assembly whose components have all been migrated already, and NX can still find all of those component files, it will process every single reference in every single file again, even though nothing can change because the components are already migrated.

So, what I do is keep each individual migration as small as possible and avoid reprocessing the same lower-level files over and over. To do that I try to migrate the assemblies from the bottom up, and once a file has migrated I move it out of my local migration directory using that post-action on UF_CLONE_end_part_clone_cb that I mentioned earlier. So by the time I get to the top-level assembly, all of its components have already been migrated and they’re not reprocessed again. That would just take up time.

Oh, how do I go about ordering the migrations so they’re done bottom up? I don’t do a thorough analysis of what-part-is-used-where. I go by drawing size. I sort the drawings from smallest to largest and migrate them, and their constituent files, in that order. It’s not a perfect calculation, but I find that it’s a good enough heuristic for my purposes.
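The ordering heuristic is a one-liner. Here’s a sketch, assuming drawing file size is a reasonable proxy for assembly depth (small drawings tend to document small, leaf-level parts):

```python
import os

def migration_order(drawing_files):
    """Smallest drawings first: a cheap bottom-up heuristic, since small
    drawings tend to reference small (leaf-level) parts and assemblies."""
    return sorted(drawing_files, key=os.path.getsize)
```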

Every migrated rev should be statused and read-only

Okay, last one. When I migrate, some of the files may still be considered “in work”. Whatever you do, don’t leave them in TC as “in work” after you migrate. You don’t want people mucking around with those. Let them roll to the next rev to do their work. Pick a status name that means “this was migrated while still in work”. If nothing else, if you find out that something went wrong with the import, you want to be able to re-migrate the revs. That gets harder to do when people have already started working on them.

What else?

Okay, I’ve probably omitted a couple of tricks, but those are the big ones that I know.
What tricks do you have? Would any of these have saved you work on a migration you’ve done? Do you have any other suggestions?

  • Teamcenter Heretic

    I’m curious to hear from the readers… who did program-by-program migrations and who did “big-bang” style ones?
    What were your experiences, and what would you do differently if you had it to do over again?

    • Aaron Ruple

      When we implemented Teamcenter we were given the challenge to support every use case imaginable: all at once, one by one, or a cluster-based project of only particular team members. “All at once” really applied to our release data, meaning IT handled all of that and we did all the prepping. I gave a presentation at multiple regional PLM World events about the approach we developed. It’s very similar to what Scott outlines in the bottom-up procedure. We actually wrote a complete user front-end application, with extensive back-end coding, to assist in the data migration. We called the application “BUMP”, for Bottom Up Migration Process. We overcame the parent-child relationship challenge of going from multi-level to flat by creating a data table with a unique ID associated with every part file we had. We then developed and stored a parent/child relationship for each level. When we process the data, we start at the very bottom of the structure. We know where that is because, from the top-level request, we have extrapolated each child, determined whether it is also a parent, and so forth. Once we have the whole structure in a data table, we start at the bottom and work our way programmatically up the structure until all data is imported. We even send an e-mail at the end to tell the user that their request was successful! There was a lot of code development, and we now have a very strong understanding of NX structures and relationships, but over the last 5 years we have migrated some 500,000 parts with little problem. I will get the presentation uploaded soon.
      Aaron Ruple – Navistar Inc

      • http://plmdojo.com/ Scott Pigman


        It sounds like we’ve done similar work. I’d like to see your presentation.

        For my first bulk NX migration I also wrote a custom user front-end application. I used ug_edit_part_names -list recursively to build up a SQLite database of dependencies between files, and then mapped each actual file referenced to the latest revision of that file, wherever it might be, often in a different directory. I then used the database to drive the migration. It did work, but it was complicated to set up.

        The steps I outline above — copy the files to a flattened directory structure, remove duplicates and prior revs, then migrate the smallest drawings first — worked as well or better for my later migrations and took a lot less effort to set up.

  • gmail

    I applaud your Humour at Foobar Sir 😀

  • AP

    This is brilliant. Thank you for your website!!

  • Kunal Bhavasar

    Is there a better option than OpenPDM to sync data from another PLM to Teamcenter?

  • Heman Patil

    I have used IPS Loader to migrate 2-D data like AutoCAD drawings to Teamcenter. Can we use IPS Loader to migrate NX data to Teamcenter? If yes, has anybody been successful with IPS, and are there any hints on the prep work that needs to be done before migration?
