Source Control (git) for Identity Manager

Hi All,

Wondering if anyone has come up with a means of getting away from zip file transports and into storing text-based configs in source control such as git. 

The options I see are:

Use transports for extraction

  1. Developers use the standard OneIM tools to make changes.
  2. Developers use change labels to tag changes as normal.
  3. Developers extract transports for their change label.
  4. Developers run an automated script to 'explode' the transport and place the DbObjects into a standard directory tree within source control.

The script in step 4 would need to be able to merge the DbObjects in a given transport with the objects already present in source control.
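
The "explode" step could be sketched roughly as below. This is a minimal sketch, not a working tool: it assumes the transport ZIP contains XML files whose `<Object>` elements carry `table` and `key` attributes, which are hypothetical names — the real OneIM transport XML schema will differ and would need to be mapped accordingly.

```python
# Sketch: "explode" a transport ZIP into one file per DbObject.
# Assumptions (hypothetical): the ZIP holds XML files whose <Object>
# elements carry 'table' and 'key' attributes; the real OneIM transport
# schema uses different element/attribute names.
import zipfile
import xml.etree.ElementTree as ET
from pathlib import Path

def explode_transport(zip_path: str, repo_dir: str) -> int:
    """Write each object to <repo_dir>/<table>/<key>.xml; return count."""
    out = Path(repo_dir)
    written = 0
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if not name.lower().endswith(".xml"):
                continue
            root = ET.fromstring(zf.read(name))
            for obj in root.iter("Object"):
                table = obj.get("table", "Unknown")
                key = obj.get("key", "no-key")
                # Writing one file per object unconditionally is the
                # "merge": git itself reports new/changed/unchanged.
                dest = out / table / f"{key}.xml"
                dest.parent.mkdir(parents=True, exist_ok=True)
                dest.write_bytes(ET.tostring(obj))
                written += 1
    return written
```

Because each object lands at a deterministic path, re-running the script over a newer transport simply overwrites the affected files, and the diff falls out of source control for free.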

Completely ignore transports

  1. Developers use the standard tools to make changes.
  2. Developers use change labels to tag changes as normal.
  3. An external tool is run with a change label or list of change labels as input.
  4. This tool could then examine each label and enumerate objects to be imported to source control.
  5. Objects are exported into a standard directory tree within source control, using a file format/structure deemed appropriate (not necessarily transport XML).
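
The shape of such a tool might look like the sketch below. How the label's objects are actually enumerated (presumably a SQL query against the OneIM database's change-label tables, which are not shown here) is deliberately left behind a callable, so this only illustrates the directory-tree side; every name in it is an assumption.

```python
# Sketch of option 2: given rows already fetched for one or more change
# labels (e.g. via a SQL query against the OneIM database -- the exact
# label/object tables are not shown here), write each object into a
# standard directory tree. 'fetch' is any callable yielding
# (table, object_key, payload) tuples; the payload format is up to you
# and need not be transport XML.
from pathlib import Path
from typing import Callable, Iterable, Tuple

Row = Tuple[str, str, str]  # (table, object_key, payload)

def export_labels(fetch: Callable[[str], Iterable[Row]],
                  labels: list[str], repo_dir: str) -> list[Path]:
    written = []
    for label in labels:
        for table, key, payload in fetch(label):
            dest = Path(repo_dir) / table / f"{key}.txt"
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_text(payload, encoding="utf-8")
            written.append(dest)
    return written
```

Keeping the fetch step pluggable also means the same tree-writing code could serve option 1, fed from an exploded transport instead of a label query.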

Personally I like the first option as it uses the OOTB tool (transporter) to extract config. It also means that I could theoretically rebuild a transport based on source control commits and import that with the OOTB transporter (for CD).

Anyone have thoughts, learnings etc on the topic?

  • Change Management is one of the most difficult aspects of One Identity.  The problem becomes clear if you go into Designer and just start clicking around, changing fields, adding and deleting things, and then do a "Commit to Database": you'll see that even things which seem minor result in updates to countless tables.  As you mentioned, the change label assigned to a set of changes is stored as a huge block of XML in the transporter's ZIP file.  As XML is text, you could theoretically save it to Git or somewhere, but I doubt this will be of any use to anyone.

    One thing I do is use the System Library for Visual Studio (SystemLibrary.sln).  When you run this, you can select "Library / Recreate system library...", and it will collect all the code snippets from Templates and Scripts and write them to several large Visual Basic files which can easily be brought under source control.  It's better than nothing.

    Aside from this, a colleague of mine has created an Import/Export tool.  You can define export templates for each table, and then use where-clauses to select and export data that needs to be transported.  He has templates for things like Overview Forms and Job Chains that gather all related records from the relevant tables and export them to a single text file.  These files can be loaded into a different system as an alternative to transporting changes, but they could also be used in source control to manage changes.

    What I also have is a reporting tool with hundreds of queries for extracting things such as the entire menu tree in Manager, or everything published to the IT Shop, together with all the tiny little details.  I can run a report, make some changes, rerun the report, and the tool will highlight all differences.  It at least gives me a high-level overview of what has changed.  This works for Designer and the Sync Editor, but is not much help with IT Shop web-app changes, unfortunately.  What it does do is allow me to document changes to data: I can read in a 100,000-object hierarchy from AD, make some changes to Templates, apply the changes, reload the report, and produce a diff showing every attribute that has changed.  The point is, you need to think about more than just code.  Data can be just as important.
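
The "rerun the report and highlight all differences" idea is easy to reproduce generically, since the reports are plain text dumps. A minimal sketch using the standard library (nothing here is OneIM-specific):

```python
# Diff two text report snapshots line by line, unified-diff style.
# 'before' and 'after' are the full text of two report runs.
import difflib

def report_diff(before: str, after: str) -> str:
    return "\n".join(difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="before", tofile="after", lineterm=""))
```

Run against two snapshots of, say, the exported menu tree, the `-`/`+` lines are exactly the changed attributes.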

  • Sorry for such a delayed response, but thanks for your post!

    I think the XMLs would be quite handy - they are able to represent almost any object in the database, and the transporter is now pretty well versed at managing dependencies etc.

    My ideal flow would be as follows:

    1. Do dev work in any of the tools.
    2. Add changed objects and dependencies to a transport.
    3. Save transport to disk as normal.
    4. Expand the transport into a standard/repeatable structure using a tool (to be created).
    5. Create a feature branch in source control.
    6. Commit new/changed/deleted XMLs to the branch.
    7. Repeat 1-6 until the feature is complete.
    8. Merge the feature branch into a branch representing the next environment (i.e. uat, master (prod)).
    9. Trigger packaging of the changes made by the merged branch (not all changes) - i.e. make a transport (to be created).
    10. Push transport (automatically) to deployment host with tools on it.
    11. Run command line transporter on the transport.
    12. Repeat steps 1-11 for each feature.
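
Step 9 is the trickier of the two missing tools. A minimal sketch, assuming the changed-file list is supplied by CI (e.g. from something like `git diff --name-only` between the merge base and the merge commit), and assuming — which would need verifying — that the OOTB transporter will accept a ZIP re-assembled this way:

```python
# Sketch of step 9: package only the files touched by the merge into a
# transport-style ZIP. The changed-file list is passed in; in CI it
# would come from the merge diff. Whether the OOTB transporter accepts
# a re-assembled ZIP like this is an assumption to verify.
import zipfile
from pathlib import Path

def package_transport(repo_dir: str, changed: list[str],
                      zip_path: str) -> int:
    """Zip the listed repo-relative files; return how many were added."""
    count = 0
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for rel in changed:
            src = Path(repo_dir) / rel
            if src.is_file():  # deleted objects would need separate handling
                zf.write(src, arcname=rel)
                count += 1
    return count
```

Deletions are the open question here — a file removed from the branch has no XML to package, so expressing "delete this object" to the transporter would need its own convention.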

    The only bits I still need are the tools to carry out steps 4 and 9, which I don't think would be too hard to build. It would be awesome if OneIM would just put some extra CLI switches into their transporter to allow both of those things to happen!

    Glen.