DynamoDB performance relative to table size

How can we calculate the performance impact on DynamoDB relative to table size? I plan to keep one year of data in the active data set. With this approach the data will grow at 144 GB/month = 1,728 GB/year, and using TTL we will delete anything older permanently. For now we plan to provision write capacity = 500 WCU and read capacity = 200 RCU, based on our usage. We do not want to archive the data to S3, because restoring it would then be an on-demand operation.
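A minimal sketch of the TTL-based expiry described above. The attribute name `expires_at` and the item shape are assumptions for illustration; DynamoDB's TTL feature only requires that the chosen attribute hold an epoch-seconds timestamp:

```python
import time

RETENTION_SECONDS = 365 * 24 * 60 * 60  # keep items in the active set for one year

def with_ttl(item, now=None):
    """Return a copy of the item with an expiry attribute for DynamoDB TTL.

    DynamoDB deletes the item (eventually, within a background window)
    once the epoch-seconds value in the TTL attribute has passed.
    """
    now = time.time() if now is None else now
    item = dict(item)
    item["expires_at"] = int(now) + RETENTION_SECONDS
    return item

# Enabling TTL on the table (boto3, sketched here as a comment since it
# needs live credentials) would look like:
#   client.update_time_to_live(
#       TableName="events",  # hypothetical table name
#       TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
#   )
```

Note that TTL deletions are free of write-capacity charges, which is one reason this approach is attractive over manual purging.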

We want DynamoDB itself to keep serving this data. We could use a separate table and write to that instead, but before doing so, can we calculate the performance impact as a function of table size?

If we provision 500 WCU, then by the end of the year that capacity is spread across roughly 1728 / 10 = 173 partitions, which dilutes it: 500 / 173 ≈ 2.9 WCU per partition, far below what a single hot partition may need.
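A back-of-the-envelope sketch of that dilution. The 10 GB storage, 1,000 WCU, and 3,000 RCU per-partition figures are DynamoDB's published per-partition limits; the actual partition count is managed internally and not exposed, so this is only an estimate:

```python
import math

GB_PER_PARTITION = 10          # approximate max storage per partition
MAX_WCU_PER_PARTITION = 1000   # per-partition write throughput limit
MAX_RCU_PER_PARTITION = 3000   # per-partition read throughput limit

def estimate_partitions(size_gb, wcu, rcu):
    """Rough partition count: the larger of the size-driven and
    throughput-driven estimates, per DynamoDB's documented behavior."""
    by_size = math.ceil(size_gb / GB_PER_PARTITION)
    by_throughput = math.ceil(wcu / MAX_WCU_PER_PARTITION
                              + rcu / MAX_RCU_PER_PARTITION)
    return max(by_size, by_throughput)

size_gb = 144 * 12  # 1,728 GB after one year of growth
partitions = estimate_partitions(size_gb, wcu=500, rcu=200)
wcu_per_partition = 500 / partitions
print(partitions, round(wcu_per_partition, 2))  # → 173 2.89
```

Under these assumptions, provisioned throughput per partition shrinks as storage grows, which is exactly the dilution the question is asking about; whether it hurts in practice depends on how evenly the partition keys spread the write traffic.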

Can you suggest an approach?

Specifically: how can we calculate the performance impact relative to the table's data size?