A startup is running a three-month pilot deployment of around a hundred sensors that measure street noise and air quality in urban areas. It was noted that each month around 4 GB of device data is generated. The company uses a load-balanced, auto-scaled layer of EC2 instances and an RDS database with 500 GB of standard storage. The pilot was a success, and the company now wants to deploy at least 100,000 sensors, which must be supported by the backend. The user wishes to store the data for a minimum of two years in order to analyze it. What should the user do?
Answer / Rohit Gautam
To handle the increased data storage requirements, the user should move the sensor data into Amazon S3, which is a cost-effective and scalable solution for long-term storage; 500 GB of RDS standard storage cannot hold two years of data from 100,000 sensors. The user can then use AWS Glue for data integration and processing, and Amazon Redshift for data warehousing, to analyze the large data set efficiently.
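A quick back-of-the-envelope sizing makes the point concrete. This is a sketch in Python; the constants come from the question itself, while the assumption that data volume scales linearly with sensor count is mine:

```python
# Rough storage projection, assuming data volume scales linearly
# with the number of sensors (constants taken from the question).

PILOT_SENSORS = 100          # pilot fleet size
PILOT_GB_PER_MONTH = 4       # data generated per month during the pilot
TARGET_SENSORS = 100_000     # planned deployment size
RETENTION_MONTHS = 24        # two-year retention requirement
RDS_STORAGE_GB = 500         # provisioned RDS standard storage

scale = TARGET_SENSORS / PILOT_SENSORS       # 1000x more sensors
monthly_gb = PILOT_GB_PER_MONTH * scale      # projected GB per month
total_gb = monthly_gb * RETENTION_MONTHS     # projected GB over 2 years

print(f"Projected monthly volume: {monthly_gb:,.0f} GB")
print(f"Two-year total: {total_gb:,.0f} GB ({total_gb / 1000:.0f} TB)")
print(f"Fits in {RDS_STORAGE_GB} GB of RDS storage? {total_gb <= RDS_STORAGE_GB}")
```

Roughly 96 TB over two years is orders of magnitude beyond the provisioned RDS storage, which is why object storage such as S3 is the natural fit for the raw data.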