Custom Data Uploads
This page describes the steps to follow when uploading a custom data feed to Fredhopper.
If you are implementing a custom feed for additional data enrichment, you will need to send the data to the upload service.
The following section details the standard data input format expected by the upload service. It also explains how these files should be transmitted to Fredhopper's Managed Services environment and how to trigger the data load on the instance.
Once the additional data has been successfully uploaded and processed, it will be integrated with the next product feed update (full or incremental). The same custom data will be maintained for all subsequent load-data triggers until a new version of the custom data is uploaded using this process.
These are the steps to follow when uploading data to Fredhopper; the process is the same for item data, suggest data, and custom uploads. The file names and the URLs to upload to will differ, and are covered in the examples.
Create a data.zip file containing the data you need to upload to Fredhopper.
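As a sketch of this step, the archive can be produced with Python's zipfile module. The file name 'custom-data.xml' is a placeholder for whatever file(s) your feed actually produces:

```python
import zipfile

# Placeholder data file for illustration; in practice this is the
# enrichment file your feed generates.
with open("custom-data.xml", "w") as f:
    f.write("<items/>")

# Package the custom data into the data.zip archive.
with zipfile.ZipFile("data.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("custom-data.xml")
```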
Generate a checksum value of the zip file. This checksum will be used to validate the upload.
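The checksum can be computed as below. MD5 is an assumption here, since this page does not name the algorithm; confirm the required algorithm with your Technical Consultant:

```python
import hashlib

def file_checksum(path: str) -> str:
    """Return the hex checksum of a file, read in chunks.

    MD5 is assumed; swap in the algorithm your environment requires.
    """
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```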
Upload the zip file to the Fredhopper Managed Services environment that is given to you by your Technical Consultant using the 'fas' service interface.
Be sure to include the checksum value in the request, as per the example below, which is directed to the live1 instance.
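The upload request can be sketched as follows. The host name, the 'fas:live1' path, and the checksum header name are all illustrative assumptions; your Technical Consultant provides the real endpoint and credentials:

```python
import urllib.request

# Hypothetical endpoint for the live1 instance; substitute the URL
# supplied by your Technical Consultant.
UPLOAD_URL = "https://my.example.fredhopperservices.com/fas:live1/data/input/data.zip"

def build_upload_request(zip_bytes: bytes, checksum: str) -> urllib.request.Request:
    """Build a PUT request carrying the zip payload and its checksum."""
    return urllib.request.Request(
        UPLOAD_URL,
        data=zip_bytes,
        method="PUT",
        headers={
            "Content-MD5": checksum,  # assumed header name for the checksum
            "Content-Type": "application/zip",
        },
    )

# urllib.request.urlopen(build_upload_request(...)) would perform the
# actual upload; it is not called here.
```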
Once the file has been uploaded, the system will send a simple HTTP response back with some important information contained within the header and body sections:
Capture the 'data-id' value from the HTTP response body that you receive from the API, as it will be used in subsequent steps. The 'data-id' value is unique to every request.
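Capturing the value might look like the sketch below. The response body format shown is a guess for illustration only; adapt the pattern to the actual response your instance returns:

```python
import re

def extract_data_id(response_body: str) -> str:
    """Pull the data-id out of the upload response body.

    The body layout is hypothetical; adjust the pattern to match the
    real response from your environment.
    """
    match = re.search(r'data-id[=:\s"]*([\w.-]+)', response_body)
    if not match:
        raise ValueError("no data-id found in response body")
    return match.group(1)
```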
So far, we have only uploaded the data to the Fredhopper Managed Services environment. Next, initiate a trigger instructing Fredhopper to re-index with the new data, using the 'data-id' value that you captured in the previous step.
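A sketch of the trigger request is below. The endpoint path and the parameter name are assumptions; use the values given for your environment:

```python
import urllib.request

# Hypothetical trigger endpoint; substitute the URL for your instance.
TRIGGER_URL = "https://my.example.fredhopperservices.com/fas:live1/trigger/load-data"

def build_trigger_request(data_id: str) -> urllib.request.Request:
    """Build a POST request that passes the captured data-id
    (parameter name 'data-id' is assumed)."""
    body = f"data-id={data_id}".encode()
    return urllib.request.Request(TRIGGER_URL, data=body, method="POST")
```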
The HTTP response header that you receive back from our system at this stage contains a new 'Location' value, which we can use to monitor the status of the re-index:
The status value for the re-index job can be checked by sending the following command using the 'Location' value that you captured in the previous step:
When the job is complete, you should get a success response.
Possible status codes returned are:
Unknown: no known state yet; the trigger has not yet been picked up.
Scheduled: the trigger has been picked up and will start execution soon.
Running: the triggered job is currently running.
Delayed: the triggered job is ready to run, but delayed (e.g. due to insufficient capacity).
Success: the triggered job has finished successfully.
Failure: the triggered job has failed.
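A status-polling loop can treat Success and Failure as terminal and keep waiting on the other four statuses. The sketch below takes any callable that fetches the current status string (for example, a GET on the captured 'Location' URL); the interval and retry limit are arbitrary choices:

```python
import time

# Terminal vs. in-progress statuses, as listed above.
TERMINAL = {"Success", "Failure"}
IN_PROGRESS = {"Unknown", "Scheduled", "Running", "Delayed"}

def poll_until_done(fetch_status, interval_s: float = 5.0, max_polls: int = 120) -> str:
    """Poll fetch_status() until a terminal status is returned.

    fetch_status is any zero-argument callable returning one of the
    six status strings, e.g. a function that GETs the 'Location' URL.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval_s)
    raise TimeoutError("re-index job did not finish in time")
```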