I am writing to a DynamoDB table with 15k WCU provisioned and auto scaling enabled, from an AWS Lambda function. However, I see up to 40k throttled write requests, and my consumed capacity only reaches about 5k WCU. The problem is that my data is heavily skewed: the table's partition key and a GSI on the table both lead to hot keys. I can't change the design of the table, since it is already in production and other applications depend on it. Is there any way to distribute the writes more uniformly, perhaps with a key-salting technique like the one used for Spark here: https://datarus.wordpress.com/2015/05/04/fighting-the-skew-in-spark/ ?
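For context, the Spark-style salting I have in mind would look something like the sketch below: append a shard suffix to the hot partition key so that writes to one logical key land on several physical partitions, at the cost of a scatter-gather over all shards on reads. The names (`NUM_SHARDS`, `salted_pk`, etc.) are my own hypothetical helpers, not anything from my actual schema, and I realize this would require either the other readers to know about the suffix or a separate salted copy of the data:

```python
import random

NUM_SHARDS = 10  # hypothetical shard count; tune to how hot the key is


def salted_pk(base_key: str, shard: int) -> str:
    """Build the physical partition key for one shard of a logical key."""
    return f"{base_key}#{shard}"


def write_shard_key(base_key: str) -> str:
    """Pick a random shard per write so a hot key spreads across partitions.

    Works when writes are independent items (append-only / no in-place
    counters that must live under a single key).
    """
    return salted_pk(base_key, random.randrange(NUM_SHARDS))


def all_shard_keys(base_key: str) -> list[str]:
    """Readers must scatter-gather: query every shard and merge the results."""
    return [salted_pk(base_key, s) for s in range(NUM_SHARDS)]
```

So a write would use `put_item` with `write_shard_key("hot-customer-42")`, and a read would issue one `Query` per key in `all_shard_keys("hot-customer-42")` and merge. My question is whether something like this can work given that I cannot touch the existing key schema.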