
Git, and Why We Need Distributed PLM


Like any good software developer, I use a source control system daily. But I’ve fallen behind the times. The latest source control paradigm out there is something called a Distributed Version Control System (DVCS). The two main DVCSs are Git and Mercurial. GitHub, which hosts Git projects, seems to be getting written up weekly in technology and business publications. I’m playing catch up, but now I understand what the big deal is. The PLM world needs to take notice. We need Distributed PLM systems.

Here’s why.

What is Distributed Version Control?

Before we talk about PLM software, we need to talk about software source control. Just hang in there if you’re not a developer yourself. I’ll get back to PLM in a bit.

I Was Blind…

I had heard of Git a few years ago, but the point of it had eluded me. Something about everyone having a full copy of the repository. Say wha…? Whatever. I’ll just stick with Subversion, thank you. After all, that’s a modern system. It’s so much better than CVS, or so I hear.

I started to realize that I had missed the boat when I attended the 2012 Global Day of Code Retreat (which, by the way, was an awesome event — I highly recommend it). Sitting elbow to elbow with some really sharp professional programmers, I kept hearing “Git this…” and “Git that…”. In fact, the organizers of the event had recommended that everyone have a Git repository on their laptops for the retreat.

The next clue was when I installed Aptana Studio on my laptop to work on a little Python project of mine. Guess what: it came preconfigured to work with a Git repository. So I set one up for myself, got a free GitHub account, and used Git for the first time.

But I still didn’t get what the big deal was.

…But Now I See

Recently I was checking out Joel Spolsky’s blog, Joel On Software. If you’re a software developer, you need to be following Joel’s work. Even if you haven’t heard of him, you’re probably familiar with some of his work. Among (many) other things, he’s one of the cofounders of Stack Overflow. And if you’re someone who employs software developers, you really need to be reading Joel. In particular, go read what he has to say about desk chairs and private offices. Please. Do it now, I’ll wait. I have to go refill my coffee anyhow.

Okay, everyone back now? Cool. Let’s get back on track.

So Joel has a recent post about how he’s come to realize that distributed version control is superior to centralized version control.

In order to explain to the rest of us why he’s become a DVCS convert, Joel put together a tutorial on Mercurial with a special Re-Education section for those of us familiar with Subversion.

Joel on Subversion

Here’s what Joel has to say about Subversion:

Now, here’s how Subversion works:

  • When you check new code in, everybody else gets it.

Since all new code that you write has bugs, you have a choice.

  • You can check in buggy code and drive everyone else crazy, or
  • You can avoid checking it in until it’s fully debugged.

Subversion always gives you this horrible dilemma. Either the repository is full of bugs because it includes new code that was just written, or new code that was just written is not in the repository.

Subversion team members often go days or weeks without checking anything in… All this fear about checkins means people write code for weeks and weeks without the benefit of version control.

Why have version control if you can’t use it?
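For the non-developers still hanging in there, here is roughly what that looks like in practice (the commands are real Subversion; the commit message is just a placeholder):

svn commit -m "half-finished bracket redesign"   # lands in the central repository immediately
svn update                                       # what every teammate runs next -- now they all have my half-finished work

There is no middle ground between keeping work unversioned on your hard drive and handing it to the whole team.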

Centralized Version Control, Illustrated

Here’s how Joel illustrates life with a centralized Subversion Repository:
[Diagram: working copies synchronizing with a central Subversion repository]

Everyone has a local working copy of the code base which they periodically synchronize with the master version on the server. Or not.

Joel on Mercurial

Now we start to get to what I had missed regarding Distributed Version Control Systems. True, every user has a local repository, but there’s still a central repository. Users check work into their local repository while they’re developing, and then merge their changes into the central repository.

Distributed Version Control, Illustrated

It looks like this:

[Diagram: local repositories pushing to and pulling from a central repository]

I’ll let Joel explain what this means.

So you can commit your code to your private repository, and get all the benefit of version control, whenever you like. Every time you reach a logical point where your code is a little bit better, you can commit it.

Once it’s solid, and you’re willing to let other people use your new code, you push your changes from your repository to a central repository that everyone else pulls from, and they finally see your code. When it’s ready.

Mercurial separates the act of committing new code from the act of inflicting it on everybody else.

And that means that you can commit (hg com) without anyone else getting your changes. When you’ve got a bunch of changes that you like that are stable and all is well, you push them (hg push) to the main repository.
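If you’ve never touched Mercurial, a minimal session goes something like this (the commands are real Mercurial; the file name and messages are placeholders):

hg init                            # create my own private repository
hg add bracket_redesign.prt        # start tracking a file
hg commit -m "rough first pass"    # recorded locally; nobody else sees it
hg commit -m "fixed the fillet"    # commit as often as I like
hg push                            # only now do my changes reach the central repository

Every commit before that final push is version control working purely for my benefit.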

The Problem with Centralized PLM

That our current PLM systems follow the centralized data model shouldn’t be surprising or controversial. That’s just how it is. The question is, why is that a problem? After all, software development is completely different from designing airplanes and automobiles, right?

No.

Our PLM users are facing the same d*** problems that software developers face.

Worse than that, not only are current PLM systems not as good as a Git or a Mercurial, they’re not even as good as Subversion.

So what’s wrong with centralized PLM? Recall the primary problem with Subversion that Joel highlighted: “When you check new code in, everybody else gets it.”

Since most of us use PLM to manage CAD data, let’s look at how that plays out for CAD.

Option A: Check in bad designs

I hope that it’s uncontroversial that designs aren’t perfect before they’re finished — if then! If you’re an NX user in a Teamcenter environment, every time you save your work you’re checking in a new change to the central repository. Congratulations, you’ve just polluted the system with your junk (sheesh, that sounds dirty). Oh sure, we have statuses and workflows and revision rules to make sure that other users don’t see your junk unless they want to (that doesn’t sound any better), but that stuff is hard to understand. Just last week someone commented, “I’ve run into very few engineering organizations that understand precise/imprecise and Revision Rules.” In fact, my post on understanding revision rules is one of the most popular posts on this site.

Option B: Avoid check in

The other option is to avoid checking in your work until you’re sure it’s ready. While this isn’t an option for NX, most of Teamcenter’s CAD integrations allow this behavior. Typically, CAD integrations copy files from Teamcenter down to a local working directory from which the CAD application works with the files. The central Teamcenter repository is not updated until the user manually checks in their work… which could be days, if not weeks, later.

So, what exactly is the benefit to the user of using a PLM system?

The Promise of Distributed PLM

Do you see now that we have the same problems with PLM software that Joel was describing with centralized source control systems? So let’s imagine that we’re living in a future world where we have a distributed PLM system. And robot butlers and flying cars. Not that they’re relevant, but they would be so damn cool.

Note: I am not talking about Classic or Global Multisite here. To get close to what I mean by Distributed PLM, every single user would have to have at least one personal instance of TC that was multi-sited back to the central site. That may be theoretically possible, but it would be a very heavyweight, and cumbersome, implementation. I suspect that a more usable implementation would maintain only the delta between what a user had checked into his or her own private repo and the central repository.
So imagine that you’re a CAD user, and in addition to the central repository that you’re used to, you have a private repository. Now when you save your NX model or check in your ProE model, you’re checking into your own personal repository. The main repository knows nothing of your work until you push your changes to it. We’re not putting unfinished work out where other users can find it, but we still have the benefits of version control.

Let’s noodle what that means. For starters, revision rules become a lot less important.

#ifdef vs. Revision Rules

While running down the shortcomings of Subversion, Joel brought up the topic of branching and merging (which I’ll get to shortly myself) and how it doesn’t work very well in Subversion.

[A]lmost every Subversion team told me…they swore off branches. And now what they do is this: each new feature is in a big #ifdef block. So they can work in one single trunk, while customers never see the new code until it’s debugged, and frankly, that’s ridiculous.

Keeping stable and dev code separate is precisely what source code control is supposed to let you do.

Good lord, what an ugly way to write code.

#if TC_VERSION < 8
int foobar(tag_t rev)
{
    // implementation for TcEngineering
    ...
}
#elif TC_VERSION < 9
int foobar(tag_t rev)
{
    // implementation for TC 8.x
    ...
}
#elif TC_VERSION < 10
int foobar(tag_t rev)
{
    // implementation for TC 9.x
    ...
}
#else
int foobar(tag_t rev)
{
    // implementation for TC 10+
    ...
}
#endif

Egads. Thank God we don’t have to deal with that mess in Teamcenter, right?

Wrong.

We do the same exact thing. We just use revision rules instead of #ifdef.

Don’t believe me? Pretend that foobar was an item instead of a function.

  • Foobar
    • Foobar/01 (Frozen)
    • Foobar/02 (Frozen)
    • Foobar/-.01 (Manufacturing Preview)
    • Foobar/A (Released)
    • Foobar/B (Released)
    • Foobar/C (Unstatused, owner=Scott)
    • Foobar/D (Unstatused, owner=Joel)

Tell me that this isn’t basically how we select which revision to load in an assembly.

#if RevisionRule == "Precise"
LOAD(foobar/01)
 
#elif RevisionRule == "Latest Frozen"
LOAD(foobar/02)
 
#elif RevisionRule == "Latest Manufacturing Preview"
LOAD(foobar/-.01)
 
#elif RevisionRule == "Latest Released"
LOAD(foobar/B)
 
#elif RevisionRule == "Latest Working, current user is owner"
LOAD(foobar/C)
 
#elif RevisionRule == "Latest Working"
LOAD(foobar/D)
 
#endif

Holy crap, we have done the same thing that the Subversion users ended up doing. We’ve put everything into the “trunk” of the central repository and then we have a bunch of complicated rules which none of the users really understand in order to figure out which version of the model we should be seeing at any given time.

And this brings me to the other point I wanted to make about what’s missing from PLM. The Subversion users ended up with a crappy #ifdef code implementation because branching and merging in Subversion doesn’t work very well.

We ended up with a complicated set of release statuses and revision rules because we never had the opportunity to branch our designs. Teamcenter just doesn’t support it. I hear that Windchill now offers a branching capability that they adopted from PTC’s older IntraLINK product. If any other PLM systems support branching, I’d love to hear more about it.

Branching and Merging

Now we get to why I said earlier that what we have now in PLM software isn’t even as good as what Subversion users have. Despite its problems, Subversion does have the ability to create an independent code branch for development and then merge that back into the trunk. Teamcenter forces us to just put all of our changes directly into the trunk.

Let’s return to our future world of robot butlers, flying cars, and Distributed PLM. And let’s stipulate that in this world we can branch our designs. If I want to propose a change, I don’t create a new revision of the model; I create an independent branch of the design that only I can see. When I look at my branch I see the same things that everyone else sees, except for the things I’m changing. But no one else sees it unless I share my branch with them. My branch could change a single model, or it could change an entire assembly. I do my work in that branch. When I want to submit the proposal, I share my branch for review. Only if it’s approved do I merge my updates back into the central “trunk” of the repository, making them available for all. If my proposal is rejected, I just… do nothing. My branch can sit there forever for all I care. It’s not hurting anybody. But if the Powers That Be finally realize that my proposal was right, then it’s there, ready to be revived. Think about how much cleaner that is than having everything that’s ever been attempted, accepted, and rejected living forever under the central item.
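For comparison, here is what that proposal workflow already looks like in Git (the commands are real Git; the branch name and messages are hypothetical):

git checkout -b lighter-bracket     # a private branch for my proposal
# ...edit models, committing as often as I like...
git commit -am "reduce bracket mass"
git push origin lighter-bracket     # share the branch for review
# if approved, merge into the trunk:
git checkout master
git merge lighter-bracket
# if rejected: do nothing -- the branch just sits there, hurting nobody

Swap source files for models and assemblies, and that’s the Distributed PLM workflow I’m describing.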

I won’t get into why Joel says that branching and merging is better under Mercurial than under Subversion, but it is interesting. (Briefly: Subversion tracks versions, Mercurial tracks changes.)

This is a Big Deal

If you haven’t figured it out by now, I think this is a big deal. We tend to think of PLM and Source Control as being separate worlds, but they’re really dealing with very similar problems. But while source control systems have been evolving, the central core of how PLM works seems to have stagnated a decade or more ago. I imagine that PLM vendors, always looking for a new feature to sell to a new customer (or use to retain an existing one), aren’t spending a lot of time rethinking the fundamental model of version control they’re built upon. Look! Shiny object!

It’s time PLM started adopting some of the capabilities source control systems are providing. This won’t be an incremental improvement (“We’ve redesigned the interface to reduce the number of mouse clicks a typical user makes in a day by 5%!”). No, this will be huge.

In closing, Joel Spolsky compares Subversion and Mercurial by saying,

If you are using Subversion, stop it. Just stop. Subversion = Leeches. Mercurial and Git = Antibiotics. We have better technology now.

Our PLM systems are not yet on the level of antibiotics. Without support for branching and merging, they’re not even on the level of leeches. I’m not sure what quack medicine was considered state-of-the-art before leeches came into vogue, but that’s about where we’re at. Goat sacrifice, maybe. And we’re the goats.

I’m really hoping we’ll see Distributed PLM in the future. As a Teamcenter guy, I hope Teamcenter implements it first. If not, Windchill or Aras or one of the others that I can’t think of right now might just use this to gain a market advantage — and more power to them if they do.

What do you think?

So what do you all think? Am I onto something here? Or do you figure that I must be on something? I don’t pretend that this would be easy to implement. But I think it would be worth it.

I’m sure there are problems I’ve overlooked. I’m also sure there are ways to leverage branching, merging, and local repositories that I haven’t considered. Please share both in the comments below.

Lastly, if you liked this post, your +1’s, likes, and shares help to get the word out to the rest of the world and will be very much appreciated. Thank you!

  • Teamcenter Heretic

Sorry, don’t know nuthin bout them +1s and likes, but I do think you hit the nail on the head with your description of rev rules. They are a PITA for sure, but back in the “early days”, before we had the ability to rack and stack the rules and when precise was a static characteristic of the BOM, it was much worse.

The easiest way to deal with junk is to use the “working owned by x” set of rules. This way I can get my working, then my group’s working, then everyone else’s frozen… etc.
    I like to use explicit check-out when I’m doing a dangerous set of operations on my design, because that is the best and easiest way to revert working data.

    • http://plmdojo.com/ Scott Pigman

If you can’t see the AddThis share/like icons on the post it may be because your browser settings are blocking them; it’s probably a JavaScript setting. But that’s not really all that important.

      I’m looking forward to (IIRC) NX 8.5 which will allow us to force users to use explicit check-out. I hate hate hate implicit check-out & check-in.

  • http://www.facebook.com/randy.ellsworth Randy Ellsworth

    Nice article. It illustrates a couple of basic principles.

First, that you really need a development and test (quality) environment for Teamcenter that runs alongside production. Changes to the data model, workflows, etc. are performed in development and qualified in test before being promoted to production. Standard SDLC practice.

    Second, not all CAD integrations behave the same. If you are running NX then you have multiple versions (not revisions) of the saved model before having to commit by checking it in. NX I-deas doesn’t understand versions and every save is a new revision.

    I like your article and the arguments are sound. Teamcenter is software and software benefits by adding abstraction layers. Decoupling design work is a little more challenging but equally benefits from the lessons that software development demonstrates.

    • http://plmdojo.com/ Scott Pigman

Thanks, Randy. I think it’s important to sometimes look to other domains and look for fundamental changes instead of just looking for the marginal improvements around the edges of how things work now. Of course, once you get an idea like that, you have to go back to the real world and deal with the dissatisfaction of what you have to work with today.

  • Sree Harsha

Excellent and detailed explanation.
    Thanks for enlightening us.

    • http://plmdojo.com/ Scott Pigman

      Thank you for reading :)

  • Don Knab

Very nice, Scott. I think two of the most frustrating use cases (which you alluded to above) are:

1) The ability to make some exploratory changes to a released part (the What If scenario). I want to do this in some managed fashion without bumping up to the next rev letter.

    2) A method of managing multiple changes to a part by two different groups. Change-A might be a slight enhancement, maybe for the next model year. Meanwhile Change-B comes along (a safety issue) which must be implemented asap.

Use Case #2 can be avoided (and probably should be) by following strict Form-Fit-Function rules and creating a new part number. However, you don’t always want to go down that path.

    I wonder how other folks are working around some of these issues in Teamcenter. Good discussion.

    • http://plmdojo.com/ Scott Pigman

Another form of #2 is the contract that stipulates that all RNs must be approved by the customer, who then takes a year or more to review and approve the change while more change requests pile up.

      I’ve seen magic rev letters used — A, B, C are real revisions, what-if revisions look like A.01, A.02, B.01, etc. And then that drops us into the revision rule mess.

  • jeppe

This is a very good article, setting the focus on the old dinosaurs that today’s PDM/PLM systems are. If you enjoyed this article you will probably like this TED talk:

    http://www.ted.com/talks/clay_shirky_how_the_internet_will_one_day_transform_government.html

    • http://plmdojo.com/ Scott Pigman

      Thank you, that was very interesting.

    • Teamcenter Heretic

Dude, we can’t even get Siemens to certify anything other than “antique” compilers. I don’t hold out much hope for anything like branching 😉

      • http://plmdojo.com/ Scott Pigman

Oh, I know, but sometimes you have to look up from the muck you’re mired in and realize that there are other paths possible.

  • Pritesh Bhadane

Great post… a few comments:

    Distributed version control suits software development, especially the open source community, where the contributors are many.

    I am not sure if product development (engineering/manufacturing) works in a similar fashion. Usually there is a limited set of people/teams working on a product development effort, and they may not require distributed version control, or rather can survive with the existing revision/configuration rules.

    Also, the size of the data may be a hindrance to getting distributed version control in PLM. But yes, with every passing day things are improving with technology, and I would love to see this implemented in PLM. Looking at the pace at which PLM systems implement the latest technology, I guess it will take another decade or so to see any progress on this front.

    • http://plmdojo.com/ Scott Pigman

      Thanks for reading and commenting, Pritesh.
      I disagree with the “limited set of people” assertion. If your product is airplanes or automobiles, for example, you will have a huge team of people working on design. Even if few of them ever load the entire assembly, everyone’s work has an impact on it.

As for the size of data — to be honest, I doubt that the private repositories would be truly local if this were ever implemented. Volume data surely couldn’t be duplicated (at least in the near term… I’ve seen amazing improvements though, so give it time). Maybe the metadata could, but Oracle licenses are expensive and they charge by the CPU.

My guess is that a practical way to implement this would involve keeping track of just the deltas. A check-in would update your private repository record with an entry for what was checked in, and a push would move that to the central repository, but in reality they would both be on the same server.

  • Jeremie FEBURIE

    Nice post (again)

The ability to create branches inside a single item is definitely missing in Teamcenter and other PLM systems.

I wonder if this kind of DVCS management system could improve “offline” mode usage?

    • http://plmdojo.com/ Scott Pigman

I wonder if this kind of DVCS management system could improve “offline” mode usage

That’s a thought — theoretically users could continue to do check-ins locally and then merge into the central repository more easily.

  • Edward Lopategui

The possibilities are intriguing. A more robust revision engine has been something that has occupied my thoughts for quite some time. You can teach rev rules to users with time (and a bit of patience), but the branching capability is definitely something without equivalent in the PLM space (to my knowledge anyway). You could approximate it with some heavy customization, but who wants to do that?

    Fully distributed file storage is unnecessary and potentially problematic because you have to deal with various import/export, replica, and failure scenarios. Scott, I think you have the right idea that the partitioning would happen on the server side, but would be transparent to the user with respect to their “personal repository”.

Branching would have its definite advantages… trade studies come to mind in an aerospace context. Three different users working on three different design variations for which there will be only one go-forward design.

I think the merge operation is where things may fall apart in concept, because when you are merging code you are merging text files. Merging multi-level branched changes will be dependent on the authoring tool, because there will be many scenarios where changes from different users have to roll into the same higher-level assembly (more than just simple add/remove). You can merge a Word file rather successfully with effort. Tried Excel? Erm. And CAD, not so much.

    But I seriously think the direction of the idea is absolutely correct, the implementation thereof will no doubt be a conundrum. Which is kinda why we are where we are.

    I am also curious about the revision rule hate, they are not the best thing ever, but far better than myriad other functionalities I can think of on several PLM systems. Is the heartburn about precise/imprecise?

    • http://plmdojo.com/ Scott Pigman

      I think the merge operation is where things may fall apart in concept, because when you are merging code you are merging text files.

I agree merging changes to a single file would be problematic. I think it makes more sense as an assembly edit; think of an assembly as a type of source file and each component as a line in that file — text merges tend to fall apart when there are conflicting edits on a single line.

I’ve been noodling another idea about that though — could the geometric representation of a file be separated from the modeling information that defines it? All of this synchronous/direct modeling (I forget which CAD uses which term…) seems to be a step in that direction. Suppose you had a JT file that was defined by multiple model definition files — one for this boss here, another for these holes over there, etc. Maybe I’m getting a bit too wacky now though. Didn’t the latest version of CATIA try to store some of their model definition in Dassault’s PLM system, and isn’t that why a bunch of CATIA customers decided to switch to NX instead of converting their models to the new format?

      I am also curious about the revision rule hate

      Besides the issues addressed in the post?

I have yet to meet a user who understands them. Most users just want to be told which rev rule to use when, and heaven help them if things don’t work like they expect. I think they make sense to those of us who are programmers because we’re used to that sort of sequential evaluation of rules. But for most non-programmers? Eh, not so much. (Certainly there are exceptions.)

      • Edward Lopategui

I’ve been noodling another idea about that though — could the geometric representation of a file be separated from the modeling information that defines it?

I think you are talking about separating content from format, i.e. like in XML. It should be technically possible, but I think it would require reworking the authoring tools to be merge friendly. Even a seemingly universal XML format like MS Office has its limitations and secrets, which needlessly complicates really interesting functionality outside of the authoring app itself.

Even in the simplest of merged change cases at the assembly level, things are likely to fall down. Say two different components are changed in two different branches, and it just so happens they were mated together in the assembly. Even if you successfully update the product structure in a merge, good luck picking up after the mess that will ensue, because many of the changes will likely be treated as a remove/add rather than an open-as.

        Of course what we all secretly want is to be able to merge changes on the same component, or Word document or spreadsheet. That would be an epic win.

Didn’t the latest version of CATIA try to store some of their model definition in Dassault’s PLM system

        From what I understand it seems that CATIA V6 requires ENOVIA regardless of which PLM system is actually being used for data management. This requirement is sure to unearth bad memories of V4 to V5 migrations, and drive CATIA disciples to either A. Run away screaming, or B. Procrastinate and pretend the problem will go away.

        I have yet to meet a user who understands them.

Well, most users have trouble understanding PLM, period, but that’s a whole other barrel of posts. I’ve been able to teach users to use revision rules effectively, but it requires focusing exclusively on a few key rules relevant for the business (in my case users were using three primary rules: latest working, an “owner working” rule, and straight-up precise). The trouble with rev rules is that under certain conditions they can become very complex or numerous – that’s usually where most users check out.

        • Georges

I can confirm, as a CATIAv5 user, that in order to use CATIAv6 you need to have ENOVIA (CATIA’s PLM system) installed, which in turn means you need to have a database plus a specific version of a database engine installed and configured exactly right for it to work.

Just getting the right database engine, licensing server, database configuration, ENOVIA configuration and all that ready before running CATIAv6 has been a mission impossible for me and my colleagues. Actually, after having invested many days of our time, we weren’t able to completely get it running like it should. We’re making the switch to another filesystem-based design package (with separate versioning) just because of how impossible the prerequisites of CATIAv6 are to install. It’s the first software ever in my entire (engineering) life that I haven’t even been able to get running. It’s absolutely, downright frustrating, because I would love to be able to use the actual CATIAv6 UI. Dassault, however, has decided to make that so impossible that I can’t.

          • http://www.eng-eng.com/ Ed Lopategui

            Sounds like a tragedy, Georges. Look me up on my website or LinkedIn; I’d love to hear more from the trenches.

          • http://plmdojo.com/ Scott Pigman

I’ve heard a little about this — mostly in the context of companies deciding to go from CATIA V5 to NX instead of V6 to avoid being locked into Enovia. If you have to pay such a huge cost to upgrade from one version of the software to the next, why not switch to software that doesn’t put you through the wringer with every upgrade?

          • Teamcenter Heretic

I wish my clients were so invested in software that I could always leave the past behind…
            working from a clean slate is always sooooo nice 😉

            As an aside… NX files almost ALWAYS update from version to version. Even from the antique ones 😉

  • pgarrish

You’ve highlighted a very real problem with this, Scott. There is no easy way in TC to do work in parallel. The crux of this, though, is to manage dependencies – when I split the design to work on two parallel changes, or two alternative answers to the same change, and then I need to work on a subsequent change (on both paths, potentially), how do I map the dependencies that say “this design for B works assuming A1 or A2, but this design for C only works with A1, and this design for D requires A1, A2 or B”? You can do it at the change level (either textually, with a new relationship, or manually), but you almost need to know the specific ‘part’ within A1 that C depends on.

    Also, the distributed nature of large product design teams means that it is far more complex than a s/w branch as there are so many dependencies hanging off of that design change work – documentation, maintenance definition, tooling etc…

    That said, I have seen the reluctance to check-in models, documents etc, for fear of ‘exposing’ work in progress so something that addresses that concern – private version control – is definitely needed.

    • http://plmdojo.com/ Scott Pigman

      Thank you for your comments.

      the distributed nature of large product design teams means that it is far more complex than a s/w branch as there are so many dependencies

Check out the video jeppe posted, below. There’s a screenshot in there of the dependency graph for the Linux kernel.

      The online documentation for Git looks very good. I’m reading up on how Git manages branching and merging to see what might be applied to PLM.

One interesting feature of Git that I only alluded to was that it tracks changes, not differences. So if you rename a function and move it to another source file, that’s seen as two distinct changes, a rename and a move, whereas something like SVN can just tell you that in one file a function was deleted and in another a different function was added. I’m hopeful that by managing changes instead of versions we can manage some of the difficulties you address in product design.

  • Dustin Neifer

    Maybe an ER to divvy Home folders up into public and private partitions. The entire Home folder of any given user still being a server side workspace for recovery purposes.

    • http://plmdojo.com/ Scott Pigman

      Noooooo! No ERs! Don’t you realize that ERs are the placebo pills of GTAC? “Here kid, take one of these ERs and quit bothering me.”
      :-)

  • Nigel Shaw

    Interesting post.

Eurostep recognised the need for a hub approach between organizations in 1999 and developed our Share-A-space product to address it. Share-A-space is a hub for PLM collaboration that addresses the complete life-cycle – from requirements to in-service. The initial thinking was to address design-manufacture inter-company relationships, but we have since used the same approach between requirements tools, analysis tools, and ERP as well as PLM. We make use of ISO standards such as PLCS (ISO 10303-239) to do this.

    Yes – distributed PLM is needed and there are even solutions out there to do it. The need to share inter-organization (and inter-discipline) in a managed way is much better understood now. The benefits come from synchronised data/managed change and from greater traceability. In 1999-2002 we struggled to put across the benefits. In 2011 we were a Gartner cool vendor for PLM!

    FYI we have customers who also run enterprise PLM (such as TeamCenter). A key point is to make explicit what is shared externally as well as keep it in sync with the in-house tools.
    Nigel

    • Teamcenter Heretic

Yes, but how is the integration with the CAD systems handled? Things like inter-part geometric relations, as well as data that is derived from either the assembly or some other component, are extremely difficult to handle.

  • http://www.visual-2000.com/solutions/plm Andy Marsh

    Thanks for such a nice informative post.

Product lifecycle management (PLM) software and concepts can offer amazing benefits during the development phases at footwear and garment manufacturers.

    Experts are starting to realize that a solid, enterprise-wide PLM apparel program, strongly integrated with their enterprise resource planning (ERP) programs, can really help them stay ahead.
