AWS Cloud Security
Cloud security at AWS is fundamental to designing a security-sensitive system. Understanding the AWS Global Infrastructure and performing a comprehensive security assessment will let you take advantage of the AWS cloud to scale and innovate while maintaining a secure environment, often at a lower cost than in an on-premises environment. The Top 10 Most Downloaded AWS Security and Compliance Documents in 2017 (more than three hundred pages across all documents) cover AWS security and the best practices for implementing it in depth. To save you the time of reading all of these documents, I made this security setup cheat sheet highlighting the important services and tools from the AWS security processes.
Overview of Security Processes
Based on the AWS Well-Architected Framework, incorporating its five pillars (operational excellence, security, reliability, performance efficiency, and cost optimization) into your architecture will help you produce stable and efficient systems. The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. There are five best practice areas for security in the cloud:
- Identity and Access Management
- Detective Controls
- Infrastructure Protection
- Data Protection
- Incident Response
Before we discuss the details of each best practice area and its related AWS services, let’s take a look at the Shared Responsibility Model to understand how security in the cloud differs slightly from security in your on-premises data centers.
Shared Responsibility Model
This is the key aspect of operating services on the AWS cloud: AWS is responsible for securing the underlying infrastructure that supports the cloud, and you are responsible for anything you put in the cloud or connect to the cloud. AWS responsibility, “Security of the Cloud”: AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. Customer responsibility, “Security in the Cloud”: the customer’s responsibility is determined by the AWS Cloud services that the customer selects, which in turn determines the amount of configuration work the customer must perform as part of their security responsibilities. For example:
- AWS Infrastructure as a Service (IaaS) offerings, such as Amazon EC2, Amazon VPC, and Amazon EBS, are completely under your control and require you to perform all of the necessary security configuration and management tasks. The customer manages the OS and everything above it, including security and patches; AWS manages the hypervisor and everything below it, including the physical infrastructure.
- For AWS container services like Amazon RDS or Amazon Redshift, AWS manages everything (e.g. launching and maintaining instances, patching the guest OS or database, and replicating databases) except user credentials and account management, which you handle with AWS Identity and Access Management (IAM). It is recommended to enable multi-factor authentication (MFA) on each account, require the use of SSL/TLS to communicate with your AWS resources, and set up API/user activity logging with AWS CloudTrail.
- For AWS abstracted services (SaaS), such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and the platform, and you access endpoints to store and retrieve data. Amazon S3 and DynamoDB are tightly integrated with IAM. You are responsible for managing your data (including classifying your assets) and for using IAM tools to apply ACL-type permissions to individual resources at the platform level, or permissions based on user identity or user responsibility at the IAM user/group level. For some services, such as Amazon S3, you can also use platform-provided encryption of data at rest, or platform-provided HTTPS encapsulation of your payloads, to protect your data in transit to and from the service.
Identity & Access Management (IAM)
With IAM, you can centrally manage users, security credentials such as passwords, access keys, and permissions policies that control which AWS services and resources users can access. You create IAM users under your AWS account and then assign them permissions directly, or assign them to groups to which you assign permissions.
AWS account: This is the account that you create when you first sign up for AWS. Your AWS account represents a business relationship between you and AWS. You use your AWS account to manage your AWS resources and services. AWS account root credentials have permissions to all AWS resources and services, so they are very powerful. Do not use root account credentials for day-to-day interactions with AWS. In some cases, your organization might choose to use several AWS accounts, one for each major department, for example, and then create IAM users within each of the AWS accounts for the appropriate people and resources. Here is a strategy for defining accounts:
| Business Requirement | Proposed Design | Comments |
| --- | --- | --- |
| Centralized security management | Single AWS account | Centralize information security management and minimize overhead. |
| Separation of production, development, and testing environments | Three AWS accounts | Create one AWS account for production services, one for development, and one for testing. |
| Multiple autonomous departments | Multiple AWS accounts | Create separate AWS accounts for each autonomous part of the organization. You can assign permissions and policies under each account. |
| Centralized security management with multiple autonomous independent projects | Multiple AWS accounts | Create a single AWS account for common project resources (such as DNS services, Active Directory, CMS, etc.). Then create separate AWS accounts per project. You can assign permissions and policies under each project account and grant access to resources across accounts. |
IAM groups. Collections of IAM users in one AWS account. You can create IAM groups on a functional, organizational, or geographic basis, or by project, or on any other basis where IAM users need to access similar AWS resources to do their jobs. You can provide each IAM group with permissions to access AWS resources by assigning one or more IAM policies. An IAM policy is a JSON-format document that defines one or more permissions. All policies assigned to an IAM group are inherited by the IAM users who are members of the group.
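As a sketch of the JSON policy format described above, here is a minimal read-only policy built in Python; the bucket name is a placeholder, not a real resource:

```python
import json

# A minimal IAM policy document granting read-only access to one
# S3 bucket. "example-bucket" is an illustrative placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# Policies are submitted to IAM as JSON text, e.g. via
# iam.create_policy(PolicyDocument=json.dumps(policy), ...).
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Attaching this policy to an IAM group grants the permissions to every user in the group.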
IAM users. With IAM you can create multiple users, each with individual security credentials, all controlled under a single AWS account. An IAM user can be a person, service, or application that needs access to your AWS resources through the Management Console, the CLI, or directly via the APIs.
Each AWS account or IAM user is a unique identity and has unique long-term credentials. There are two primary types of credentials associated with these identities: (1) those used for sign-in to the AWS Management Console and AWS portal pages, and (2) those used for programmatic access to the AWS APIs.
| Credential Type | Use | Description |
| --- | --- | --- |
| Username/Password | AWS root account or IAM user account login to the AWS Management Console | A string of characters used to log in to your AWS account or IAM account. AWS passwords must be a minimum of 6 characters and may be up to 128 characters. |
| Multi-Factor Authentication (MFA) | AWS root account or IAM user account login to the AWS Management Console | A six-digit single-use code that is required in addition to your password to log in to your AWS account or IAM user account. |
| Access Keys | Digitally signed requests to AWS APIs (using the AWS SDK, CLI, or REST/Query APIs) | Includes an access key ID and a secret access key. You use access keys to digitally sign programmatic requests that you make to AWS. |
| Key Pairs | SSH login to EC2 instances; CloudFront signed URLs | A key pair is required to connect to an EC2 instance launched from a public AMI. The keys that Amazon EC2 uses are 1024-bit SSH-2 RSA keys. You can have a key pair generated automatically for you when you launch the instance, or you can upload your own. |
| X.509 Certificates | Digitally signed SOAP requests to AWS APIs; SSL server certificates for HTTPS | X.509 certificates are only used to sign SOAP-based requests (currently used only with Amazon S3). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page. |
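The “digitally sign programmatic requests” part of access keys refers to AWS Signature Version 4, whose documented signing-key derivation is an HMAC-SHA256 chain over the date, region, and service. A sketch with a placeholder secret key:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive an AWS Signature Version 4 signing key via the
    documented HMAC-SHA256 chain: kDate -> kRegion -> kService -> kSigning."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder secret key; never hard-code real credentials in source.
key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20240101", "us-east-1", "s3")
print(key.hex())
```

Because the key is scoped to a date, region, and service, a leaked signature cannot be replayed elsewhere; the SDKs and CLI perform this derivation for you.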
There are scenarios in which you want to delegate access to users or services that don’t normally have access to your AWS resources. For example, applications that run on an Amazon EC2 instance may need access to AWS resources such as Amazon S3 buckets or an Amazon DynamoDB table. Or users from one account (e.g. a test environment) might need cross-account access to resources in another account (e.g. a production environment). Or federated users might already have identities outside of AWS, such as in your corporate directory. You define an IAM role for these scenarios. An IAM role is a set of permissions to access the resources that a user or service needs, but the permissions are not attached to a specific IAM user or group. Instead, IAM users, mobile and EC2-based applications, or AWS services (like Amazon EC2) can programmatically assume a role. Assuming the role returns temporary security credentials that the user or application can use to make programmatic requests to AWS. These temporary security credentials have a configurable expiration and are automatically rotated. Using IAM roles and temporary security credentials means you don’t always have to manage long-term credentials and IAM users for each entity that requires access to a resource.
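A role carries two kinds of policy documents: a trust policy stating who may assume it, and permissions policies stating what it may do. As a sketch, a trust policy allowing the EC2 service to assume a role (so applications on an instance receive temporary credentials instead of long-term keys) looks like this:

```python
import json

# Trust policy: lets the EC2 service assume this role, so applications
# on an instance obtain temporary, auto-rotated credentials instead of
# long-term access keys.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Typically passed when creating the role, e.g.
# iam.create_role(RoleName=..., AssumeRolePolicyDocument=json.dumps(trust_policy)).
print(json.dumps(trust_policy))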
Encrypt everything where possible to protect data at rest and in transit on the AWS platform. AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and it uses FIPS 140-2 validated hardware security modules (HSMs) to protect the security of your keys. You can also use your own key management processes by storing keys in tamper-proof storage, such as Hardware Security Modules (HSMs). Amazon Web Services provides an HSM service in the cloud as well, known as AWS CloudHSM. You interact with keys in your AWS CloudHSM cluster in much the same way that you interact with your applications running in Amazon EC2. You can use AWS CloudHSM to support a variety of use cases, such as Digital Rights Management (DRM), Public Key Infrastructure (PKI), document signing, and cryptographic functions using PKCS#11, Java JCE, or Microsoft CNG interfaces. Alternatively, you can use HSMs that store keys on premises, and access them over secure links, such as IPsec virtual private networks (VPNs) to Amazon VPC, or AWS Direct Connect with IPsec.
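KMS typically protects data through envelope encryption: a fresh data key encrypts the payload, and the master key (which never leaves the HSM) encrypts only the data key. The toy sketch below mimics that flow in pure Python with a stand-in keystream cipher; real KMS-backed encryption uses AES-256 via the `GenerateDataKey` and `Decrypt` APIs, not this construction:

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Stand-in cipher (SHA-256 counter keystream XOR). Illustrative
    only; KMS-backed encryption uses AES-256, not this construction."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

# "Master key": in real KMS this never leaves the HSM.
master_key = secrets.token_bytes(32)

# Envelope encryption: generate a data key per object, encrypt the
# payload with it, and store only the wrapped (encrypted) data key.
data_key = secrets.token_bytes(32)
wrapped_key = _keystream_xor(master_key, data_key)   # like GenerateDataKey's CiphertextBlob
ciphertext = _keystream_xor(data_key, b"sensitive payload")

# Decryption: unwrap the data key with the master key, then decrypt.
recovered_key = _keystream_xor(master_key, wrapped_key)
plaintext = _keystream_xor(recovered_key, ciphertext)
print(plaintext)
```

The point of the pattern: bulk data is never sent to the HSM, only 32-byte data keys are, so key material stays centrally controlled while encryption scales.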
Protecting Data on S3
A summary of S3 protection of data at rest:
- Permissions: Use bucket-level or object-level permissions alongside IAM policies to protect resources from unauthorized access and to prevent information disclosure, data integrity compromise, or deletion.
- Versioning: Amazon S3 supports object versions. Versioning is disabled by default. Enable versioning to store a new version of every modified or deleted object, from which you can restore compromised objects if necessary.
- Replication: Amazon S3 replicates each object across all Availability Zones within the respective region. Replication can provide data and service availability in the case of system failure, but it provides no protection against accidental deletion or data integrity compromise, because it replicates changes across all Availability Zones where it stores copies. Amazon S3 offers standard redundancy and reduced redundancy options, which have different durability objectives and price points.
- Backup: Amazon S3 supports data replication and versioning instead of automatic backups. You can, however, use application-level technologies to back up data stored in Amazon S3 to other AWS regions or to on-premises backup systems.
- Encryption–server side: Amazon S3 supports server-side encryption of user data. Server-side encryption is transparent to the end user. AWS generates a unique encryption key for each object and then encrypts the object using AES-256. The encryption key is then itself encrypted with AES-256 using a master key that is stored in a secure location. The master key is rotated on a regular basis.
- Encryption–client side: With client-side encryption you create and manage your own encryption keys. Keys you create are not exported to AWS in clear text. Your applications encrypt data before submitting it to Amazon S3, and decrypt data after receiving it from Amazon S3. Data is stored in an encrypted form, with keys and algorithms only known to you. While you can use any encryption algorithm, and either symmetric or asymmetric keys to encrypt the data, the AWS-provided Java SDK offers Amazon S3 client-side encryption features. See References and Further Reading for more information.
Protecting Data in Transit to Amazon S3: S3 is accessed over HTTPS. When the AWS Management Console is used to manage Amazon S3, an SSL/TLS secure connection is established between the client browser and the service console endpoint, and all subsequent traffic is protected within this connection. When the Amazon S3 APIs are used directly or indirectly, an SSL/TLS connection is established between the client and the Amazon S3 endpoint, and all subsequent HTTP and user payload traffic is encapsulated within the protected session.
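Both the at-rest and in-transit protections above can be enforced with a bucket policy: deny uploads that omit server-side encryption, and deny any request not made over TLS. A sketch (the bucket name is a placeholder); `s3:x-amz-server-side-encryption` and `aws:SecureTransport` are standard S3 condition keys:

```python
import json

BUCKET = "example-bucket"  # placeholder bucket name

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny object uploads that do not request SSE (AES-256).
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
            },
        },
        {
            # Deny any access over plain HTTP (non-TLS transport).
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Because an explicit Deny overrides any Allow, these statements hold even if another policy grants broad access.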
Protecting Data on EBS
Amazon EBS is the AWS abstract block storage service. You receive each Amazon EBS volume in raw, unformatted mode, as if it were a new hard disk. You can partition the Amazon EBS volume, create software RAID arrays, format the partitions with any file system you choose, and ultimately protect the data on the Amazon EBS volume. A summary of EBS protection of data at rest:
- Replication: Each Amazon EBS volume is stored as a file, and AWS creates two copies of the EBS volume for redundancy. Both copies reside in the same Availability Zone, however, so while Amazon EBS replication can survive hardware failure, it is not suitable as an availability tool for prolonged outages or for disaster recovery purposes. We recommend that you replicate data at the application level, and/or create backups.
- Backup: Amazon EBS provides snapshots that capture the data stored on an Amazon EBS volume at a specific point in time. If the volume is corrupt (for example, due to system failure), or data from it is deleted, you can restore the volume from snapshots. Amazon EBS snapshots are AWS objects to which IAM users, groups, and roles can be assigned permissions so that only authorized users can access Amazon EBS backups.
- Encryption: Microsoft Windows EFS: If you are running Microsoft Windows Server on AWS and you require an additional level of data confidentiality, you can implement the Encrypting File System (EFS) to further protect sensitive data stored on system or data partitions. EFS is an extension to the NTFS file system that provides transparent file and folder encryption and integrates with Windows and Active Directory key management facilities, and PKI. You can manage your own keys on EFS.
- Encryption: Microsoft Windows BitLocker: BitLocker is a volume (or partition, in the case of a single drive) encryption solution included in Windows Server 2008 and later operating systems. BitLocker uses AES 128-bit and 256-bit encryption. By default, BitLocker requires a Trusted Platform Module (TPM) to store keys; this is not supported on Amazon EC2. However, you can protect EBS volumes using BitLocker if you configure it to use a password.
- Encryption: Linux dm-crypt: On Linux instances running kernel version 2.6 and later, you can use dm-crypt to configure transparent data encryption on Amazon EBS volumes and swap space. You can use various ciphers, as well as Linux Unified Key Setup (LUKS), for key management.
- Encryption: TrueCrypt: TrueCrypt is a third-party tool that offers transparent encryption of data at rest on Amazon EBS volumes. TrueCrypt supports both Microsoft Windows and Linux operating systems.
- Encryption and integrity authentication: SafeNet ProtectV: SafeNet ProtectV is a third-party offering that allows for full disk encryption of Amazon EBS volumes and pre-boot authentication of AMIs. SafeNet ProtectV provides data confidentiality and data integrity authentication for data and the underlying operating system.
Protecting Data on Amazon RDS
At rest: Amazon RDS leverages the same secure infrastructure as Amazon EC2. You can use the Amazon RDS service without additional protection, but if you require encryption or data integrity authentication of data at rest for compliance or other purposes, you can add protection at the application layer, or at the platform layer using SQL cryptographic functions. For example, MySQL cryptographic functions include encryption, hashing, and compression. Oracle Transparent Data Encryption is supported on Amazon RDS for Oracle Enterprise Edition under the Bring Your Own License (BYOL) model. Microsoft Transact-SQL data protection functions include encryption, signing, and hashing.
In transit: If you’re connecting to Amazon RDS from Amazon EC2 instances in the same region, you can rely on the security of the AWS network, but if you’re connecting from the Internet, you might want to use SSL/TLS for additional protection. SSL/TLS provides peer authentication via server X.509 certificates, data integrity authentication, and data encryption for the client-server connection.
Protecting Data on Amazon Glacier
Data is stored in Amazon Glacier in “archives.” At Rest: All data stored on Amazon Glacier is protected using server-side encryption with AES-256.
Protecting Data on Amazon DynamoDB
At Rest: Amazon DynamoDB is a shared service from AWS. You can use DynamoDB without adding protection, but you can also implement a data encryption layer over the standard DynamoDB service.
In transit: If you’re connecting to DynamoDB from other AWS services in the same region, you can rely on the security of the AWS network, but if you’re connecting to DynamoDB across the Internet, you should use HTTP over SSL/TLS (HTTPS) to connect to DynamoDB service endpoints. Avoid plain HTTP for access to DynamoDB, and for all connections across the Internet.
Decommission Data and Media Securely
When a storage device reaches the end of its useful life, AWS procedures include a decommissioning process designed to prevent customer data from being exposed. All decommissioned magnetic storage devices are degaussed and physically destroyed.
Infrastructure Protection
AWS provides several security capabilities and services to increase privacy and control network access. These include:
- Built-in firewalls that allow you to create private networks within AWS e.g. Virtual Private Cloud (VPC), and control network access to your instances and subnets e.g. routing table, security groups, and Network Access Control Lists (NACLs).
- Encryption in transit with TLS across all services e.g. HTTPS (HTTP over SSL/TLS) with server certificate authentication, SSL/TLS over VPC-IPSec links.
- Connectivity options that enable private, or dedicated, connections from your office or on-premises environment e.g. leverage Amazon VPC-IPSec or VPC-AWS Direct Connect to seamlessly integrate on-premises or other hosted infrastructure with your Amazon VPC resources in a secure fashion.
- DDoS mitigation technologies as part of your auto-scaling or content delivery strategy. Here is a list of common approaches for DoS/DDoS mitigation and protection in the cloud:
| Technique | Description | Protection from DoS/DDoS Attacks |
| --- | --- | --- |
| Firewalls: security groups, network access control lists, and host-based firewalls | Traditional firewall techniques limit the attack surface for potential attackers and deny traffic to and from the source or destination of an attack. | |
| Web application firewalls (WAF) | Web application firewalls provide deep packet inspection for web traffic. | |
| Host-based or inline IDS/IPS systems | IDS/IPS systems can use statistical/behavioral or signature-based algorithms to detect and contain network attacks and Trojans. | All types of attacks |
| Traffic shaping/rate limiting | DoS/DDoS attacks often deplete network and system resources. Rate limiting is a good technique for protecting scarce resources from overconsumption. | |
| Embryonic session limits | TCP SYN flooding attacks can take place in both simple and distributed form. In either case, if you have a baseline of the system, you can detect considerable deviations from the norm in the number of half-open (embryonic) TCP sessions, and drop any further TCP SYN packets from the specific sources. | |

For the WAF approach, please review Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities, e.g. Injection; Broken Authentication and Session Management; Cross-Site Scripting (XSS); Broken Access Control; Security Misconfiguration; Sensitive Data Exposure; Insufficient Attack Protection; Cross-Site Request Forgery (CSRF); Using Components with Known Vulnerabilities; and Underprotected APIs.
VPC is a very important service in AWS with many options for protecting your applications. Please review the post AWS Virtual Private Cloud for its overall functionality, a step-by-step multi-AZ, multi-subnet VPC setup including its security settings, and how to manage its high availability and disaster recovery.
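Security groups, mentioned in the firewall row above, are stateful allow lists: traffic is permitted only if some rule matches, and there are no explicit deny rules (unlike network ACLs). A small sketch of that evaluation model, with illustrative rules and addresses:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    protocol: str   # e.g. "tcp"
    from_port: int
    to_port: int
    cidr: str       # source CIDR block

# Illustrative inbound rules: HTTPS from anywhere, SSH from one office range.
inbound = [
    Rule("tcp", 443, 443, "0.0.0.0/0"),
    Rule("tcp", 22, 22, "203.0.113.0/24"),
]

def allowed(rules, protocol: str, port: int, source_ip: str) -> bool:
    """Security-group semantics: allow if any rule matches, otherwise deny."""
    return any(
        r.protocol == protocol
        and r.from_port <= port <= r.to_port
        and ip_address(source_ip) in ip_network(r.cidr)
        for r in rules
    )

print(allowed(inbound, "tcp", 443, "198.51.100.7"))  # HTTPS from anywhere
print(allowed(inbound, "tcp", 22, "198.51.100.7"))   # SSH from outside the office range
```

Because security groups are stateful, response traffic for an allowed connection is permitted automatically; NACLs, by contrast, are stateless and evaluate each direction separately.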
Detective Controls and Incident Response
Credentials and encrypted endpoints are essential for preventing security problems; AWS detective-control tools and incident response services are just as important for identifying and containing a security breach. AWS provides tools and features that include:
- Deep visibility into API calls, including who, what, when, and from where calls were made
- Log aggregation and options, streamlining investigations and compliance reporting
- Alert notifications when specific events occur or thresholds are exceeded
The following AWS services can help:
- CloudTrail: CloudTrail is for auditing your calls, providing a log of all requests for AWS resources within your account. CloudTrail captures information about every API call to every AWS resource you use, including sign-in events. You can configure CloudTrail so that it aggregates log files from multiple regions into a single Amazon S3 bucket. From there, you can then upload them to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns. By default, log files are stored securely in Amazon S3, but you can also archive them to Amazon Glacier to help meet audit and compliance requirements.
- CloudWatch: CloudWatch provides logging by collecting and monitoring system, application, and custom log files from your EC2 instances and other sources in near-real time. You can create custom dashboards for all CloudWatch metrics (the default metrics cover network, disk, CPU, and status checks). CloudWatch alarms can send notifications when particular thresholds are hit, and CloudWatch Events helps you respond to state changes, e.g. by running a Lambda function in response to an event.
- AWS Trusted Advisor: AWS Trusted Advisor inspects your AWS environment and makes recommendations to save money, improve performance, build fault-tolerant architecture, or close security gaps. It provides four checks at no additional charge to all users, including three important security checks: specific ports unrestricted, IAM use, and MFA on the root account.
- CloudFormation: CloudFormation is used for failure management and incident response. CloudFormation is an AWS provisioning tool that lets customers record the baseline configuration of the AWS resources needed to run their applications, so that they can provision and update them in an orderly and predictable fashion. CloudWatch lets you monitor the operational health of a workload while it runs. AWS CloudFormation is available at no additional charge; you pay only for the AWS resources needed to run your applications.
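The who/what/when/where questions that CloudTrail answers map directly onto fields of its JSON log files. A sketch that scans records for root-account console sign-ins; the record below is hand-made but follows the CloudTrail schema (`Records`, `eventTime`, `eventName`, `sourceIPAddress`, `userIdentity`):

```python
import json

# A hand-made CloudTrail-style log file. Real files are gzipped JSON
# with a top-level "Records" array containing many such events.
log_file = json.dumps({
    "Records": [
        {
            "eventTime": "2018-03-01T12:00:00Z",
            "eventName": "ConsoleLogin",
            "eventSource": "signin.amazonaws.com",
            "sourceIPAddress": "198.51.100.7",
            "userIdentity": {"type": "Root"},
        }
    ]
})

def root_console_logins(raw: str):
    """Return (time, source IP) for every root-account console sign-in,
    an event worth alerting on since root use should be rare."""
    return [
        (r["eventTime"], r["sourceIPAddress"])
        for r in json.loads(raw)["Records"]
        if r["eventName"] == "ConsoleLogin"
        and r["userIdentity"].get("type") == "Root"
    ]

print(root_console_logins(log_file))
```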
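A CloudWatch alarm fires when a metric breaches its threshold for a configured number of consecutive evaluation periods. The decision logic can be sketched as follows (mirroring a GreaterThanThreshold alarm; the sample values are illustrative):

```python
def alarm_state(datapoints, threshold, evaluation_periods):
    """Return "ALARM" when the most recent `evaluation_periods`
    datapoints all exceed `threshold`, otherwise "OK"."""
    recent = datapoints[-evaluation_periods:]
    if len(recent) == evaluation_periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

# e.g. CPUUtilization samples; alarm on >80% for 3 consecutive periods.
print(alarm_state([55, 83, 85, 90], threshold=80, evaluation_periods=3))  # ALARM
print(alarm_state([55, 83, 70, 90], threshold=80, evaluation_periods=3))  # OK
```

Requiring several consecutive breaching periods is what keeps a single transient spike from paging anyone.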
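The "recorded baseline configuration" CloudFormation works from is itself just a JSON (or YAML) template. A minimal sketch declaring one versioned S3 bucket; the logical name "LogBucket" is arbitrary:

```python
import json

# Minimal CloudFormation template: a single S3 bucket with
# versioning enabled. "LogBucket" is an arbitrary logical name.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"}
            },
        }
    },
}

# Provisioned with e.g. cloudformation.create_stack(
#     StackName=..., TemplateBody=json.dumps(template)).
print(json.dumps(template))
```

Because the template is version-controllable text, rebuilding a compromised environment from a known-good baseline becomes a repeatable operation rather than a manual one.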