
Using DBBulkImport API with current JobServer user.

I'm looking to use this Object API for the bulk import of thousands of 1IM objects; however, the SDK documentation only details a single constructor:

Public Sub New ( _
	tableName As String, _
	connectionString As String, _
	dbFactory As IDbFactory, _
	authString As String _
)
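
For illustration, calling it seems to require something like the sketch below - every value here is a placeholder, and the ViSqlFactory class name is my assumption for the SQL Server IDbFactory implementation, so check your own SDK:

	' All values below are placeholders for illustration only.
	Dim tableName As String = "Person"
	Dim connectionString As String = _
		"Data Source=myserver;Initial Catalog=OneIM;Integrated Security=SSPI"
	Dim authString As String = "Module=DialogUser;User=someuser;Password=..."

	' Factory class name is an assumption - use the IDbFactory
	' implementation that matches your target DBMS.
	Dim factory As IDbFactory = New ViSqlFactory()

	Dim bulkImport As New DBBulkImport(tableName, connectionString, factory, authString)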

Does anyone know of a way that this API can be used with the current connection / user as would be available on the Job Server?

I want to avoid having to provide the username/password in the authString parameter - rather utilizing the existing connection.

Thanks
  • The use-case scenario I'm looking at is the import of tens of thousands of objects into 1IM.

    When testing the standard Object Layer model - create object, save, templates/events/customizers firing - the performance was rather slow, and this appeared to be mainly due to the template/event/customizer firing. I'm looking at improving the performance of the import script by using DBBulkLoader to write the objects, then generating a job to execute the templates and any other events/customizers.

    This model permits the fan-out of the template/event/customizer calculation - providing scalability and a major performance benefit. In addition, it separates each aspect of the process, permitting one step to fail without affecting the others.

    I'm aware that the same thing could be accomplished via direct SQL (a sketch of that route follows at the end of this reply) - but I thought that by using a built-in API I would be able to leverage the existing Job Server connection information. I suppose the benefit of direct SQL is that an authString will not be required, as the connection string provides the only authentication needed.

    I was hoping to avoid having a config parameter storing the user authString - and instead leverage the existing sa connection when using DBBulkImport. If this is not possible, I guess I'll just have to revert to using the stored authString (also sketched below).

    Thanks for your input.
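
    For the direct SQL route mentioned above, a minimal sketch using the standard .NET SqlBulkCopy class - the connection string, table, and column names are all placeholders, and note that writing directly like this bypasses the 1IM object layer entirely (no templates, events, or customizers fire):

        Imports System.Data
        Imports System.Data.SqlClient

        ' All connection, table, and column details below are placeholders.
        Dim connectionString As String = _
            "Data Source=myserver;Initial Catalog=OneIM;Integrated Security=SSPI"

        ' Stage the rows in a DataTable whose columns match the target table.
        Dim staging As New DataTable("Person")
        staging.Columns.Add("FirstName", GetType(String))
        staging.Columns.Add("LastName", GetType(String))
        staging.Rows.Add("Jane", "Doe")

        ' Stream the staged rows to the server in one bulk operation.
        Using bulkCopy As New SqlBulkCopy(connectionString)
            bulkCopy.DestinationTableName = "Person"
            bulkCopy.ColumnMappings.Add("FirstName", "FirstName")
            bulkCopy.ColumnMappings.Add("LastName", "LastName")
            bulkCopy.WriteToServer(staging)
        End Using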
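
    And if I do end up storing the authString in a config parameter, reading it back would look something like the line below - the parameter path is a placeholder, and I'm assuming the script context exposes the usual Connection object:

        ' Config parameter path is a placeholder for illustration.
        Dim authString As String = Connection.GetConfigParm("Custom\BulkImport\AuthString")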