Today, the only way to upload a DXP file to the library in TSCL is via the PC client. This has a number of drawbacks:
a) It cannot be automated; the process is entirely manual.
b) Large files take a long time to upload.
c) Only users with the PC client can upload files larger than 1 Gbyte; depending on network conditions the practical limit may be even lower.
Some figures: it takes roughly 20 minutes to upload a 2 Gbyte file from a PC to the TSCL environment, and during this time the user cannot use the client at all. In locations with lower bandwidth the upload can take even longer. Uploading the same file to an AWS region using the AWS Console takes less than 8 minutes, and with the newly introduced S3 Transfer Acceleration feature under four. In practice it is therefore possible to shorten upload times by a factor of five using standard AWS functionality.
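As an illustration of the AWS side, here is a minimal sketch of such an upload using boto3. The bucket name and key are placeholders, and the sketch assumes the bucket already has Transfer Acceleration enabled:

    # Minimal sketch: upload a DXP file to S3 using Transfer Acceleration.
    # Assumes boto3 is installed, AWS credentials are configured, and the
    # (hypothetical) bucket "tscl-library-staging" has Transfer
    # Acceleration enabled.
    import boto3
    from botocore.config import Config

    s3 = boto3.client(
        "s3",
        config=Config(s3={"use_accelerate_endpoint": True}),
    )

    # upload_file handles multipart upload automatically for large files.
    s3.upload_file(
        Filename="sales_dashboard.dxp",            # placeholder file
        Bucket="tscl-library-staging",             # placeholder bucket
        Key="incoming/sales_dashboard.dxp",
    )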
If it were possible to upload a file to an S3 bucket and have the library automatically register it in the library database (e.g. via an S3 event trigger), we could also automate data uploads using standard AWS tools and APIs.
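A rough sketch of what such a trigger could look like, assuming an AWS Lambda function subscribed to S3 ObjectCreated events and a hypothetical library registration endpoint (the real TSCL library API is an assumption here):

    # Rough sketch of an S3-triggered registration step. The handler is
    # subscribed to ObjectCreated events on the staging bucket and calls
    # a hypothetical library registration endpoint.
    import json
    import urllib.parse
    import urllib.request

    LIBRARY_REGISTER_URL = "https://tscl.example.com/library/register"  # hypothetical

    def lambda_handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            # Tell the library service where the new DXP file is located.
            payload = json.dumps({"bucket": bucket, "key": key}).encode("utf-8")
            req = urllib.request.Request(
                LIBRARY_REGISTER_URL,
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)

        return {"statusCode": 200}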
This would, for example, make it possible to write a script that extracts a data table from an on-prem database as a nightly batch job, together with a small program (which might already exist) that moves the resulting file to an S3 bucket. The file would then be registered in the library automatically and users would have immediate access to it. A first, very small step towards automation.
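A sketch of what that nightly batch step could look like; the table name, connection string, database driver and bucket are placeholders chosen for illustration:

    # Sketch of a nightly batch job: dump a table from an on-prem database
    # to a CSV file and drop it in the S3 staging bucket, from where the
    # library would pick it up automatically via the trigger above.
    import csv
    import boto3
    import pyodbc  # or whatever DB driver is available on-prem

    EXPORT_FILE = "sales_daily.csv"

    conn = pyodbc.connect("DSN=onprem-warehouse")   # placeholder connection
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM sales_daily")     # placeholder table

    with open(EXPORT_FILE, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cursor.description])  # header row
        writer.writerows(cursor.fetchall())

    # Move the extract to the staging bucket; registration then happens
    # automatically on the library side.
    boto3.client("s3").upload_file(
        EXPORT_FILE, "tscl-library-staging", "incoming/sales_daily.csv"
    )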
Today, since all uploads need to be manually initiated and supervised, regular updates are impractical.