Archiving API Gateway Log/Tables
This tool allows you to maintain and archive the API Gateway log tables, which may grow in size over time. In particular, the tables below contain the bulk of the gateway data:

/DFLOW/ZAPIL01 - CMS API Log Header
/DFLOW/ZAPIL06 - CMS API Log Metadata
/DFLOW/ZAPIL10 - CMS API Log Return

It is highly recommended that you determine how long you would like to keep line-item-level logs versus header-level logs, and then create and schedule a background job (either nightly or weekly) to delete anything older. You have the choice to delete line item data (the highest volume, by a hundredfold) and header data (less volume), as well as to save the entries to an export file as part of the deletion process. Line item detail is the data contained behind any drill-down activity, e.g. viewing metadata, viewing the full return table, etc. We recommend 1 year for line item detail and 5 years for header detail. Please observe foreground vs. background file locations (SAP application/database server vs. the user's presentation server).

To archive the gateway, first create a logical filename as shown below and then set up a background job with a variant via the following steps.

a. Create the logical filename ZGLINK_API_GATEWAY_SAVED_LOG_FILES via transaction FILE. Assign it to a logical path of the same name, and configure that logical path to point to the directory where the log files should be saved. The physical path must include the filename placeholder <FILENAME>. You can verify the configuration with the sketch below.
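As a quick sanity check, the standard function module FILE_GET_NAME can resolve the logical filename to a physical path on the application server. This is a minimal sketch, not part of the product: the report name is hypothetical, the logical filename spelling (underscores restored) is inferred, and the PARAMETER_1 value is purely illustrative.

```abap
REPORT zcheck_glink_logical_file.

DATA lv_physical TYPE filename-fileextern.

" Resolve the logical filename configured in transaction FILE.
" PARAMETER_1 fills a <PARAM_1> placeholder if the physical file
" name uses one; the value here is an example only.
CALL FUNCTION 'FILE_GET_NAME'
  EXPORTING
    logical_filename = 'ZGLINK_API_GATEWAY_SAVED_LOG_FILES'
    parameter_1      = 'GATEWAY_LOG_TEST'
  IMPORTING
    file_name        = lv_physical
  EXCEPTIONS
    file_not_found   = 1
    OTHERS           = 2.

IF sy-subrc = 0.
  WRITE: / 'Resolved physical file:', lv_physical.
ELSE.
  WRITE: / 'Logical filename is not configured correctly.'.
ENDIF.
```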
b. Create a variant on the "Archive Gateway Data" selection screen to determine what to archive from the gateway logs:

   1. Go to the Gimmal Link Enterprise System Dashboard via transaction /DFLOW/SYS.
   2. Select the "Archive Gateway Data" menu option under the "Execute" menu.
   3. Enter the criteria that determine what data to archive. We suggest leaving the last year's worth of detail information, and all header information, intact. Initially, only the Date and Max Header Rows filters need to have information entered. Note that in order to do advanced date calculations, you will need to make an adjustment on the "Save as Variant..." screen (see step 4 below).

| Setting | Description | Sample Value(s) |
|---|---|---|
| CMS Profile | The Content Management System profile to limit archiving on. | |
| CMS Use Tag | The use tag to limit archiving on. | |
| CMS Repository Type | The Content Management System repository type to limit archiving on. | |
| CMS RFC Destination | The Remote Function Call (RFC) destination to limit archiving on. | |
| API Source | The source of the API calls to limit archiving on. | |
| API Type | The particular API call type to limit archiving on. | |
| SAP Object ID | The particular SAP ArchiveLink object keys to limit archiving on. | |
| Document Type | The SAP ArchiveLink document types to limit archiving on. | |
| Unique Entry Key | The unique entry in the gateway log to limit archiving on. | |
| Date | The date range to limit archiving to. It is recommended to configure this at a minimum, to limit how much data is archived. | |
| Time | The time range to limit archiving on. | |
| User Name | The SAP user name to limit archiving on. | |
| CMS Message Type | The particular message types from the logs to limit archiving on. | |
| CMS SAP User | The SAP user name to limit archiving on. | |
| CMS Rep/Store | The Content Management System repository or object store to limit archiving on. | |
| CMS ID | The Content Management System ID(s) to limit archiving on. | |
| CMS Filename | The Content Management System filename(s) to limit archiving on. | |
| CMS Type | The Content Management System document class, object type, or metadata template to limit archiving on. | |
| CMS Property | The Content Management System property that a choice list is defined to be configured against, to limit archiving on. | |
| CMS Choice List Prefix | The Content Management System choice list prefix to limit archiving on. | |
| Test Only | If selected, the archiving isn't actually performed and the process is just simulated. | |
| Delete Headers (not recommended) | If selected, the header information from the gateway log is deleted as well. Typically only the detail information needs to be deleted for space-saving reasons, so try to leave the headers when possible. | |
| Max Header Rows | Determines how many header rows are checked at one given time to see if their data should be archived. This helps limit the amount of processing for performance reasons. We suggest a value representative of the amount of processing that goes on in the given system. | 500 |
| Save Selected Entries | Optionally, if selected, the archived entries can be stored in an output file for later analysis. | |
| PC File Path (online execution only) | When this program is executed manually from the menu path, and not via a scheduled job, this determines the output file location for the archived data. | |
| Dataset File Path | When this program is executed in the background (potentially by the scheduled job), a dataset file is used for storing the archived data. | |

   4. Save the criteria as a variant that can be used by the scheduled job:
      a. Select "Save as Variant..." from the "Goto" menu under "Variants".
      b. Adjust the date range by selecting "X: Dynamic date calculation (system date)" as the selection variable for the "Date" field name.
      c. Configure the "Name of Variable (Input Only Using F4)" to select the particular date range to archive (see the sketch after these steps). We suggest selecting I for Include (the I in the I/E column) with the LT (less than) option and the "Current Date +/- ??? Days" selection from the choices. Enter 365 for the ??? value. This archives all non-header data older than a year.
      d. Make sure to enter a unique variant name; it will be needed when scheduling the job (e.g. ARCHIVEGATEWAY).
      e. Also enter a description for the information entered (e.g. "Archive the Gateway Logs").
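For clarity, here is what that dynamic date selection amounts to, expressed as ABAP. This is a minimal sketch assuming the Date filter is a standard select-option on the log date; it only illustrates the I / LT / "current date minus 365 days" semantics the variant produces.

```abap
" Hypothetical equivalent of the variant's dynamic date selection:
" sign I (include), option LT (less than), current date minus 365 days.
DATA lv_cutoff TYPE d.
lv_cutoff = sy-datum - 365.    " one year before today

DATA lr_date TYPE RANGE OF sy-datum.
lr_date = VALUE #( ( sign = 'I' option = 'LT' low = lv_cutoff ) ).
" Everything with a log date strictly before lv_cutoff is archived;
" header rows are kept as long as Delete Headers stays unselected.
```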
c. Configure a scheduled job to archive the gateway data periodically:

   1. Go to transaction SM36 to configure a new background job to archive the data.
   2. Enter a unique name for the job that reflects the archiving of Gimmal Link Enterprise data (e.g. ZGLINK_ARCHIVE_GATEWAY).
   3. Define a step to call the archive program with the variant defined previously:
      a. Define an ABAP program by pressing the "ABAP program" button.
      b. Enter /DFLOW/ZCMSAPI_GTW_ARCHIVE in the "Name" field.
      c. Select the variant name previously defined in the "Variant" field.
      d. Save the step information by clicking the floppy disk (save) icon at the bottom.
      e. Green-arrow back to view the background job; you should see that a job step was successfully defined.
   4. Define the start condition by clicking the "Start condition" button along the top:
      a. Press the "Date/Time" button along the top and define a start date and time for the archiving background job. We suggest starting the job off hours, at night or on the weekend. The job shouldn't affect any current processing, so running off hours isn't required, just suggested.
      b. Press "Period values" at the bottom to make the job run periodically. We suggest running the archiving program weekly, so that the job performs quickly and the logs are maintained fairly regularly.
      c. Save the start condition by clicking the floppy disk (save) icon at the bottom.
   5. Confirm that the background job now shows the planned start time along with how frequently it will run.
   6. Save the background job so that the archiving of gateway logs can occur. A scripted alternative to the SM36 dialog is sketched below.
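If you prefer to script the scheduling rather than click through SM36, the standard background-processing function modules JOB_OPEN, SUBMIT ... VIA JOB, and JOB_CLOSE can create the same weekly job. This is a hedged sketch, not the product's own setup program: the report name is hypothetical, the job and variant names are the example values from this guide, the start date/time are illustrative, and the program name is assumed to use underscores (/DFLOW/ZCMSAPI_GTW_ARCHIVE).

```abap
REPORT zschedule_glink_archive.

DATA: lv_jobname  TYPE tbtcjob-jobname  VALUE 'ZGLINK_ARCHIVE_GATEWAY',
      lv_jobcount TYPE tbtcjob-jobcount.

" Open a new background job definition.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount
  EXCEPTIONS
    OTHERS   = 1.
IF sy-subrc <> 0.
  MESSAGE 'Could not open background job' TYPE 'E'.
ENDIF.

" Add the archive program, with the variant saved earlier, as the job step.
SUBMIT /dflow/zcmsapi_gtw_archive
  USING SELECTION-SET 'ARCHIVEGATEWAY'
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

" Release the job: example first run Saturday 02:00, repeating weekly.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    sdlstrtdt = '20250104'    " example start date (a Saturday, off hours)
    sdlstrttm = '020000'      " 02:00 system time
    prdweeks  = 1             " repeat every week
  EXCEPTIONS
    OTHERS    = 1.
IF sy-subrc <> 0.
  MESSAGE 'Could not schedule background job' TYPE 'E'.
ENDIF.
```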