A customer has a web application that uses cookie-based sessions to track logged-in users. It is deployed on AWS using Elastic Load Balancing and Auto Scaling. When load increases, Auto Scaling launches new instances, but the load on the existing instances does not decrease, causing all existing users to have a slow experience.
What could be the cause of the poor user experience?
- The ELB DNS record’s TTL is set too high.
- The new instances are not attached to an additional ELB which is needed.
- The website uses the dynamic content feature of Amazon CloudFront, which is keeping connections alive to the ELB.
- The ELB is continuing to send requests with previously established sessions. (Ans)
Elastic MapReduce (EMR) allows you to access the underlying operating system of the EMR nodes.
- True (Ans)
- False
What are the two types of Elastic Load Balancer Sticky Sessions?
- Duration based session stickiness and server side session stickiness.
- Server side session stickiness and client side session stickiness.
- Duration based session stickiness and application-controlled session stickiness. (Ans)
- Application-controlled session stickiness and server side session stickiness.
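The difference between the two stickiness types can be sketched in a few lines of Python. This is a simplified model, not the real ELB implementation: the routing choice (`backends[0]`) is a stand-in, and only the cookie names AWSELB and JSESSIONID are the conventional ones.

```python
# Simplified model of the two Classic ELB stickiness modes. The routing
# choice (backends[0]) and cookie layout are illustrative only.

def sticky_backend_duration(cookies, backends, now, period=300):
    """Duration-based stickiness: the ELB issues its own AWSELB cookie
    with a fixed expiry; until it expires, requests return to the same backend."""
    sticky = cookies.get("AWSELB")
    if sticky and sticky["expires"] > now:
        return sticky["backend"], cookies
    backend = backends[0]  # stand-in for the ELB's routing algorithm
    return backend, dict(cookies, AWSELB={"backend": backend, "expires": now + period})

def sticky_backend_app(cookies, backends, app_cookie="JSESSIONID"):
    """Application-controlled stickiness: the ELB follows the lifetime of a
    cookie set by the application; stickiness lasts as long as that cookie exists."""
    sticky = cookies.get("AWSELB")
    if sticky and app_cookie in cookies:
        return sticky["backend"], cookies
    backend = backends[0]
    return backend, dict(cookies, AWSELB={"backend": backend})
```

The key contrast: in the duration-based mode the ELB controls the cookie's expiry itself, while in the application-controlled mode stickiness ends whenever the application's own session cookie disappears.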
You are a SysOps engineer at a start-up that is growing incredibly fast. The start-up has a fleet of EC2 instances inside an Auto Scaling group that currently scales based on CPU utilization. You notice that CPU utilization is not a good metric and that the main bottleneck is the number of connections from the ELB to an EC2 instance being maxed out. You want to adjust your Auto Scaling configuration to address this bottleneck. Which two ELB metrics should you consider using?
- SurgeQueueLength & RequestCount
- Latency & RequestCount
- RequestCount & SpilloverCount
- SurgeQueueLength & SpilloverCount (Ans)
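The reasoning behind the answer can be sketched as a scaling rule. SurgeQueueLength is the number of requests queued at the ELB waiting for a backend connection, and SpilloverCount is the number of requests rejected because that queue was full; the threshold below is hypothetical, not an AWS default.

```python
def scaling_action(surge_queue_length, spillover_count, queue_threshold=100):
    """Illustrative scale-out rule on the two connection-pressure ELB metrics.
    The queue_threshold value is hypothetical."""
    if spillover_count > 0:                   # requests are already being dropped
        return "scale_out"
    if surge_queue_length > queue_threshold:  # backlog is building up
        return "scale_out"
    return "no_change"
```

Latency and RequestCount, by contrast, rise for many reasons unrelated to the connection limit, which is why the queue-pressure pair is the better signal here.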
You are a SysOps Administrator at a fast-growing start-up which has scripted most of its infrastructure. You have a fleet of EC2 instances behind an Elastic Load Balancer. When a new instance is launched, it performs a number of system updates before automatically copying the website's code from an S3 bucket. Because of the number of steps taken when launching a new instance, it can sometimes take up to 5 minutes for the new instance to become a fully functioning web server. This delay is now causing a problem: the Elastic Load Balancer reports the new instance as unhealthy, and your Auto Scaling group then terminates it before it can go live. What should you do to prevent this?
- Reduce the number of automated steps so that the instance provisions faster and becomes healthy sooner.
- Get rid of Auto Scaling and just add new instances manually as and when you need them.
- Adjust the health check on your Elastic Load Balancer so that an instance is considered healthy within 10 seconds of it serving HTTP traffic. (Ans)
- Pre-warm the Elastic Load Balancer so that it can handle more requests faster.
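The failure mode in this scenario can be modelled in a few lines, assuming illustrative healthy/unhealthy thresholds: a boot that fails every health check for 5 minutes trips the unhealthy threshold long before the instance ever serves traffic.

```python
def instance_state(check_results, healthy_threshold=2, unhealthy_threshold=5):
    """Tiny model of ELB health checking: an instance flips to healthy after
    `healthy_threshold` consecutive passing checks, and back to unhealthy after
    `unhealthy_threshold` consecutive failures. Threshold defaults are
    illustrative, not AWS defaults."""
    state, streak_ok, streak_fail = "unhealthy", 0, 0
    for ok in check_results:
        if ok:
            streak_ok, streak_fail = streak_ok + 1, 0
            if streak_ok >= healthy_threshold:
                state = "healthy"
        else:
            streak_fail, streak_ok = streak_fail + 1, 0
            if streak_fail >= unhealthy_threshold:
                state = "unhealthy"
    return state
```

With a long bootstrap, every early check fails, so the instance stays unhealthy; loosening the health check (or giving the instance a grace window) keeps Auto Scaling from terminating it mid-provision.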
You have OS-level access to RDS.
- True
- False (Ans)
You have OS-level access to OpsWorks instances by default; however, you can choose not to manage this.
- True (Ans)
- False
You are a SysOps Administrator for an events company which is launching a new TV show tomorrow. You are expecting traffic to your website tomorrow to be huge. You have created an Auto Scaling group and have a combination of Reserved Instances and On-Demand Instances ready. You are about to contact AWS Support to ask them to pre-warm your ELB in order to meet this demand. Typically, AWS requires 3 pieces of information. Which of the below is NOT information that AWS requires?
- The start and end dates of your expected surge in traffic.
- The expected request rate per second.
- The total size of the typical request/response that you will be experiencing.
- The traffic type (HTTP or HTTPS) (Ans)
You need to implement a tiered storage system for your database backups and logs. At the end of each day the backups need to be archived to cheaper storage but must be retained for compliance reasons.
Which tiered storage proposal meets the recovery scenario, minimises cost, and addresses the compliance requirements?
- Use an independent EBS volume and daily snapshots to store the backups and log files. After 14 days rotate your EBS snapshots.
- At the end of the day, back up your database and copy the backup files to S3. After 14 days copy the data from S3 to Amazon RDS.
- Use Amazon EC2 ephemeral storage volumes for daily backups and log files. After 14 days copy the backup files to Amazon EBS.
- Use an independent EBS volume to store daily backups and copy the files to S3. Configure your Amazon S3 buckets with a lifecycle policy to archive files older than 14 days to Amazon Glacier. (Ans)
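The winning option maps onto a single S3 lifecycle rule. A minimal sketch as a Python dict follows; the rule ID and prefix are hypothetical, and with boto3 this shape would sit inside a `put_bucket_lifecycle_configuration` call.

```python
# Sketch of the chosen tiered-storage option as an S3 lifecycle rule.
# "archive-db-backups" and the "backups/" prefix are illustrative names.
lifecycle_rule = {
    "ID": "archive-db-backups",
    "Filter": {"Prefix": "backups/"},
    "Status": "Enabled",
    "Transitions": [
        # After 14 days in S3, move objects to Glacier for cheap,
        # compliance-friendly long-term retention.
        {"Days": 14, "StorageClass": "GLACIER"},
    ],
}
```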
When working with Amazon RDS, by default AWS is responsible for implementing which two management-related activities?
- Importing data and optimizing queries & creating and maintaining automated database backups in compliance with regulatory long-term retention requirements.
- Installing and periodically patching the database software & creating and maintaining automated database backups with a point-in-time recovery of up to five minutes. (Ans)
You have been tasked with identifying an appropriate storage solution for a NoSQL database that requires random I/O reads of greater than 100,000 4kB IOPS.
Which EC2 option will meet this requirement?
- EBS provisioned IOPS
- SSD instance store
- EBS optimized instances
- High Storage instance configured in RAID 10 (Ans)
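The arithmetic behind the requirement is worth spelling out: 100,000 random 4 kB reads per second implies roughly 400 MB/s of sustained random-read throughput, far beyond what a single provisioned-IOPS EBS volume offered at the time.

```python
# Throughput implied by the stated requirement.
iops = 100_000          # random reads per second
block_bytes = 4 * 1024  # 4 kB per read
throughput_mb_s = iops * block_bytes / 10**6  # megabytes per second
```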
You run a stateless web application with the following components: Elastic Load Balancer (ELB), 3 Web/Application servers on EC2, and 1 MySQL RDS database with 5000 Provisioned IOPS. Average response time for users is increasing. Looking at CloudWatch, you observe 95% CPU usage on the Web/Application servers and 20% CPU usage on the database. The average number of database disk operations varies between 2000 and 2500. Which two options should you choose?
- Choose a different EC2 instance type for the Web/Application servers with a more appropriate CPU/memory ratio & Use Auto Scaling to add additional Web/Application servers based on a CPU load threshold. (Ans)
- Use Auto Scaling to add additional Web/Application servers based on a CPU load threshold & Use Auto Scaling to add additional Web/Application servers based on a memory usage threshold.
- Use Auto Scaling to add additional Web/Application servers based on a CPU load threshold & Increase the number of open TCP connections allowed per web/application EC2 instance.
- Increase the number of open TCP connections allowed per web/application EC2 instance & Use Auto Scaling to add additional Web/Application servers based on a memory usage threshold.
Which features can be used to restrict access to data in S3? Pick the best two answers.
- Create a CloudFront distribution for the bucket & Set an S3 bucket policy.
- Create a CloudFront distribution for the bucket & Use S3 Virtual Hosting.
- Set an S3 bucket policy & Set an S3 ACL on the bucket or the object. (Ans)
- Use S3 Virtual Hosting & Set ACL on the bucket or the object.
- Set an S3 ACL on the bucket or the object & Enable IAM Identity Federation.
You need to establish a backup and archiving strategy for your company using AWS. Documents should be immediately accessible for 3 months and available for 5 years for compliance reasons.
Which AWS service fulfills these requirements in the most cost-effective way?
- Use Storage Gateway to store data to S3 and use lifecycle policies to move the data into Redshift for long-term archiving.
- Use Direct Connect to upload data to S3 and use IAM policies to move the data into Glacier for long-term archiving.
- Upload the data to EBS, use lifecycle policies to move EBS snapshots into S3 and later into Glacier for long-term archiving.
- Upload data to S3 and use lifecycle policies to move the data into Glacier for long-term archiving. (Ans)
Given the following IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::corporate_bucket/*"
    }
  ]
}
What does the IAM policy allow? (Pick 3 correct answers)
- The user is allowed to read objects from all S3 buckets owned by the account. (Ans)
- The user is allowed to write objects into the bucket named ‘corporate_bucket’. (Ans)
- The user is allowed to read objects from the bucket named ‘corporate_bucket’. (Ans)
- The user is allowed to change access rights for the bucket named ‘corporate_bucket’.
- The user is allowed to read objects in the bucket named ‘corporate_bucket’ but not allowed to list the objects in the bucket.
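The three correct statements can be checked mechanically with a much-simplified model of IAM's allow matching: no Deny statements, no conditions, and `fnmatch` standing in for IAM's own wildcard matcher.

```python
from fnmatch import fnmatch

# Mirror of the two Allow statements in the quiz policy.
POLICY = [
    {"Action": ["s3:Get*", "s3:List*"], "Resource": ["*"]},
    {"Action": ["s3:PutObject"], "Resource": ["arn:aws:s3:::corporate_bucket/*"]},
]

def is_allowed(action, resource):
    """A request is allowed if any statement matches both its action
    and its resource (simplified: Allow-only, no conditions)."""
    return any(
        any(fnmatch(action, a) for a in stmt["Action"])
        and any(fnmatch(resource, r) for r in stmt["Resource"])
        for stmt in POLICY
    )
```

Reads and lists match `s3:Get*`/`s3:List*` on every resource; writes match only inside `corporate_bucket`; and nothing grants `s3:PutBucketAcl`, so changing access rights is denied by default.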
Which AWS service does NOT have automated backups included as standard?
- RDS
- ElastiCache (Redis only)
- Redshift
- EC2 (Ans)
Ephemeral storage is
- Permanent
- Persistent
- Temporary/non-persistent (Ans)
Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:
- May be performed by the customer against their own instances, but only if performed from EC2 instances.
- May be performed by AWS, and is periodically performed by AWS.
- May be performed by AWS, and will be performed by AWS upon customer request.
- Is expressly prohibited under all circumstances.
- May be performed by the customer against their own instances with prior authorization from AWS. (Ans)
Which features can be used to restrict access to data in S3?
- Create a CloudFront distribution for the bucket.
- Set an S3 bucket policy. (Ans)
- Enable IAM Identity Federation.
- Use S3 Virtual Hosting.
You have an Amazon VPC with one private subnet, one public subnet, and one network address translation (NAT) server. You are creating a group of Amazon Elastic Compute Cloud (EC2) instances that configure themselves at launch to deploy an application via Git.
Which setup provides the highest level of security?
- Amazon EC2 instances in the private subnet, no EIPs, route outgoing traffic via the NAT. (Ans)
- Amazon EC2 instances in the public subnet, no EIPs, route outgoing traffic via the internet gateway (IGW).
- Amazon EC2 instances in the private subnet, assign EIPs, route outgoing traffic via the internet gateway (IGW).
- Amazon EC2 instances in the public subnet, assign EIPs, route outgoing traffic via the NAT.
Instance A and instance B are running in two different subnets A and B of a VPC. Instance A is not able to ping instance B. What is the reason for this?
- The route table of subnet A has no target route to subnet B.
- The security group attached to instance B does not allow inbound ICMP traffic, or the NACL on subnet B does not allow outbound ICMP traffic. (Ans)
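The answer hinges on statefulness: every VPC route table includes a built-in local route between its subnets, so routing is not the culprit, while security groups are stateful (the reply to an allowed ping is permitted automatically) and NACLs are stateless (subnet B's inbound and outbound rules must each allow ICMP). A toy model of the checks a ping from A to B must pass:

```python
def ping_allowed(route_exists, sg_b_inbound_icmp,
                 nacl_b_inbound_icmp, nacl_b_outbound_icmp):
    """Simplified model: security groups are stateful (only the inbound rule
    on B matters), NACLs are stateless (both directions on subnet B must
    allow ICMP). Real VPCs also evaluate A's own SG/NACL rules, omitted here."""
    return (route_exists
            and sg_b_inbound_icmp
            and nacl_b_inbound_icmp
            and nacl_b_outbound_icmp)
```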
- The policy linked to the IAM role on instance A is not configured correctly