08/02/2024 09:30 AM - edited 08/02/2024 01:13 PM
I have a requirement to tag a large number of privileged entitlements in a client's GCP environment, and I am using schema-based entitlement import to do so. I discovered that the schema-based entitlement import process can introduce duplicates when entitlements are repeated in the file. Providing entitlement_value + entitlementtype in the schema import does not prevent this: instead of updating the same record multiple times, the import creates a new record each time. I assume the only way to guarantee this never happens is to make sure a file never contains the same record more than once.
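Since the import will create a new record for each repeated row, one way to keep the file clean is to deduplicate it on entitlementtype + entitlement_value before uploading. A minimal sketch (the function and file names are illustrative, not part of any Saviynt tooling; it assumes a comma-delimited CSV with those two headers present):

```python
import csv

def dedupe_entitlement_file(in_path: str, out_path: str) -> int:
    """Keep only the first row for each (entitlementtype, entitlement_value)
    pair, so the schema import never sees the same entitlement twice.
    Returns the number of duplicate rows dropped."""
    seen = set()
    dropped = 0
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            key = (row["entitlementtype"], row["entitlement_value"])
            if key in seen:
                dropped += 1  # duplicate of a row already written: skip it
                continue
            seen.add(key)
            writer.writerow(row)
    return dropped
```

Running this before every import makes the "never introduce the same record twice" rule mechanical rather than a matter of discipline.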
Questions:
Thanks,
Richard
08/02/2024 05:08 PM
Questions:
08/05/2024 10:00 AM
Hi Rushikesh,
I supplied the operation (update) and the entitlement value key in the SAV/CSV files and ran a single test, which created another duplicate entitlement. Can you confirm that there's no way to prevent this from happening when using schema-based import once duplicate entitlement values exist?
Thanks,
Richard
08/05/2024 06:23 PM - edited 08/05/2024 06:29 PM
No. Please raise an Idea ticket.
The Entitlement Value key is auto-generated by the system.
08/05/2024 06:27 PM
ENTITLEMENTID is the unique field. What error do you get when you map the entitlement ID?
Thanks,
Devang Gandhi
If this reply answered your question, please Accept As Solution and give Kudos to help others who may have a similar problem.
08/06/2024 07:13 AM
I just re-ran a test of uploading a new entitlement to the endpoint with an 'ENTITLEMENTID' column (spelled as specified in the schema in the Data Analyzer). These are the attributes in my SAV file:
securitysystems,endpoints,entitlementtype,entitlement_value,entitlementid,status,customproperty30
These are the headers in my CSV file:
securitysystems,endpoints,entitlementtype,entitlement_value,entitlementid,status,customproperty30
Upon running the data import job, I got the following error in the job log: "ERROR column count mismatch SAV file column count is greater or less than csv file column count." It did create the new test entitlement, but no ENTITLEMENTID value was populated. The use of this attribute is not mentioned in the schema-based upload guide, and my testing seems to indicate that the import will not recognize it. Can you confirm?
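The "column count mismatch" error fires when the SAV attribute list and the CSV header disagree. A small pre-flight check like the following (a sketch; the function name and file paths are hypothetical, and it assumes a comma-delimited CSV) can surface the disagreement before the job runs:

```python
import csv

def check_column_counts(sav_columns: list[str], csv_path: str) -> list[str]:
    """Compare the SAV attribute list against the CSV header row.
    Returns a list of mismatch messages; an empty list means the two
    agree, so the import job should not raise the column-count error."""
    with open(csv_path, newline="") as f:
        header = next(csv.reader(f))
    problems = []
    if len(sav_columns) != len(header):
        problems.append(
            f"SAV file has {len(sav_columns)} columns, CSV has {len(header)}"
        )
    # Also flag name mismatches in aligned positions (case-insensitive).
    for sav_col, csv_col in zip(sav_columns, header):
        if sav_col.strip().lower() != csv_col.strip().lower():
            problems.append(f"column mismatch: SAV {sav_col!r} vs CSV {csv_col!r}")
    return problems
```

Printing the returned list before each run gives a quicker feedback loop than waiting for the job log.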
08/06/2024 10:23 AM
Please share a sample input file.
08/06/2024 10:42 AM - edited 08/06/2024 10:43 AM
I repeated the import with an 'operation' column added and the action 'Update' specified. The "ERROR column count mismatch" error did not recur, but the ENTITLEMENTID was still not populated. It did update another field I used for testing on the same record. Otherwise, this CSV is identical to the one I used previously.
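When the source export lacks the 'operation' column, it can be appended programmatically rather than by editing the file by hand. A sketch under the assumption of a standard comma-delimited CSV (the helper name is illustrative):

```python
import csv

def add_operation_column(in_path: str, out_path: str, action: str = "Update") -> None:
    """Append an 'operation' column set to the given action on every data row,
    leaving all other columns untouched."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)
        writer.writerow(header + ["operation"])
        for row in reader:
            writer.writerow(row + [action])
```

Remember that the SAV file's attribute list must grow by the same column, or the column-count mismatch error returns.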
08/06/2024 10:50 AM
08/06/2024 11:03 AM
I deleted line 3 (though I saw nothing there) and re-ran the import. My test field (customproperty30) was updated accordingly, but the ENTITLEMENTID was not.
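Invisible blank or delimiter-only lines like the one deleted above can be stripped automatically before upload. A sketch (not a Saviynt utility; the function name is illustrative):

```python
def strip_blank_lines(in_path: str, out_path: str) -> int:
    """Remove lines that are empty or contain only commas/whitespace,
    which can trip up an import even though they look invisible in an
    editor. Returns the number of lines removed."""
    removed = 0
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            # After stripping whitespace and bare delimiters, nothing left
            # means the line carries no data.
            if not line.strip().strip(","):
                removed += 1
                continue
            dst.write(line)
    return removed
```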
08/06/2024 08:47 PM
Importing the entitlement ID is not supported via file import or schema import.
08/07/2024 05:31 AM - edited 08/07/2024 06:34 AM
Thank you, Rushikesh. (Please disregard my pre-edit question, as I see you answered it above).