Sunday, May 20, 2018

AWS Lambda and the Serverless Cloud


Lambda manages capacity planning, provisioning, auto scaling, and updates without intervention

You get all these features, but you lose control over the platform: you have to use Amazon's OS
The idea of Lambda is outsourcing your compute, so that you can focus on your application
Lambda is a Function as a Service, or FaaS

WHY LAMBDA?
Use Cases

·      ETL Jobs: These are data-driven jobs that take in some kind of input, such as S3 files or data from an external service like a database, and perform some kind of operation on it.

·      API (API Gateway): API Gateway provides the front end to a Lambda function, so that you can accept HTTP requests and send responses without having to run a web server.


·      Mobile backends: Either as an API or directly from the AWS SDK on your mobile device, depending on how you would like to develop.

·      Infrastructure Automation: You can use it to make your infrastructure smarter. I like to think of Lambda as smart glue for the infrastructure, because it ties in well with other AWS services, making it really easy to develop little pieces of automation.


·      Data validation: With DynamoDB, you can use a Lambda function like a stored procedure.
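The API Gateway use case above can be sketched as a minimal handler. This is an illustrative sketch, not production code; the handler name and the greeting logic are assumptions, but the `statusCode`/`headers`/`body` return shape is the one API Gateway's proxy integration expects.

```python
# Hypothetical sketch of a Lambda handler behind API Gateway (proxy integration).
import json

def handler(event, context):
    # API Gateway passes query parameters, headers, and body inside the event.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

A request to the mapped route with `?name=lambda` would produce a 200 response whose JSON body greets "lambda", all without running a web server of your own.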


AWS Lambda: How It Works

 Under the hood
When using Lambda, you are using containers that run on EC2 instances. These EC2 instances are not accessible to you. In terms of management and administration, they download your code from Lambda and run it inside an isolated container.
So it looks to you as if you have an isolated machine, even though there may be many Lambda functions executing at the same time on that machine.

These containers enforce resource isolation, so you are only allowed to use the amount of RAM you have configured, and execution time is metered in 100 ms increments, both for billing and for timeouts.
Once you have created the Lambda function, you can set a timeout, and when the timeout expires the container is killed regardless of the state of your process.
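Because the container is killed when the timeout expires, long-running handlers often check how much time is left and stop cleanly. A minimal sketch, assuming an event with an `items` list (the work itself is illustrative); `get_remaining_time_in_millis()` is the method the Lambda runtime exposes on the context object:

```python
# Sketch: stop work before the configured timeout kills the container.
def handler(event, context):
    processed = []
    for item in event.get("items", []):
        # Leave a safety margin so we return before the hard timeout.
        if context.get_remaining_time_in_millis() < 1000:
            break
        processed.append(item * 2)
    return {"processed": processed}
```

Returning partial results this way beats being killed mid-loop with no output at all.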

 Anatomy of an application

·      Code- your application consists of code and any dependencies of that code, such as libraries
·      Event- the event that comes from an external source and triggers the function
·      Output- what your function provides, whether it sends it to an external service or returns it as the output of the function
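The three parts above map directly onto the shape of a handler. A deliberately trivial sketch (the echo behavior is just for illustration):

```python
# Code: the function and its dependencies.
def handler(event, context):
    # Event: data from the external source that triggered the function.
    # Output: whatever we return (or send on to another service).
    return {"echo": event}
```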
 Event Sources
In the above anatomy of an application, we discussed events as part of an application.
There are many event sources. Some of them are:
·      Schedules: You can use schedules similar to cron jobs
·      S3 events: which hand information about new, updated, or deleted files in S3 to Lambda
·      DynamoDB streams: another way to connect DynamoDB with Lambda. When a change is made to a DynamoDB table, that change is passed to a Lambda function as a trigger
·      Kinesis streams: similar to DynamoDB streams, but they can be fed by arbitrary services using the Kinesis SDK
·      SNS topics: Simple Notification Service topics carry messages that come from web tools or any other sources
·      API Gateway: the mapping between incoming HTTP requests and the events sent to your Lambda functions
·      SDK invocations: from any AWS SDK, you can invoke a Lambda function directly
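For the S3 event source, the notification Lambda receives carries the bucket and object key of each affected file. A sketch of pulling those out (the function name is my own; the `Records`/`s3`/`bucket`/`object` layout is the standard S3 event shape):

```python
# Sketch: extract (bucket, key) pairs from an S3 event notification.
def parse_s3_event(event):
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append((s3["bucket"]["name"], s3["object"]["key"]))
    return results
```

An ETL-style handler would then fetch each object and transform it.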

Accessing Amazon CloudWatch Logs for AWS Lambda
AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.
You can view logs for Lambda by using the Lambda console, the CloudWatch console, the AWS CLI, or the CloudWatch API.
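Those logging statements can use Python's standard `logging` module; anything it (or `print()`) emits ends up in the function's CloudWatch Logs group, `/aws/lambda/<function name>`. A minimal sketch (the handler body is illustrative):

```python
# Sketch: logging from a handler. Output lands in CloudWatch Logs
# under /aws/lambda/<function name>.
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info("received event with %d keys", len(event))
    result = {"ok": True}
    logger.info("returning %s", result)
    return result
```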


Using AWS Lambda with AWS CloudTrail
You can enable CloudTrail in your AWS account to get logs of API calls and a history of related events in your account. CloudTrail records all of the API access events as objects in the Amazon S3 bucket that you specify when you enable CloudTrail.
You can take advantage of Amazon S3's bucket notification feature and direct Amazon S3 to publish object-created events to AWS Lambda. Whenever CloudTrail writes logs to your S3 bucket, Amazon S3 can then invoke your Lambda function by passing the Amazon S3 object-created event as a parameter. The S3 event provides information, including the bucket name and key name of the log object that CloudTrail created. Your Lambda function code can read the log object and process the access records logged by CloudTrail. For example, you might write Lambda function code to notify you if a specific API call was made in your account.
In this scenario, you enable CloudTrail so it can write access logs to your S3 bucket. As for AWS Lambda, Amazon S3 is the event source, so Amazon S3 publishes events to AWS Lambda and invokes your Lambda function.
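The filtering step can be sketched as below. In a real handler you would first fetch and gunzip the log object named in the S3 event (for example with boto3's `s3.get_object`); here I assume the JSON has already been loaded. The helper name is my own, but the `Records` list with an `eventName` field per record is the documented CloudTrail log shape:

```python
# Sketch: scan already-parsed CloudTrail records for a specific API call.
def find_calls(trail_log, event_name):
    return [r for r in trail_log.get("Records", [])
            if r.get("eventName") == event_name]
```

The matching records could then be forwarded to, say, an SNS topic to notify you.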




Tuesday, March 6, 2018

Docker Swarm

With Docker Swarm, you can create and manage Docker clusters. Swarm can be used to disperse containers across multiple hosts.
Once you have the Docker daemon installed, installing Docker Swarm is simple:

docker pull swarm

The three components of Docker Swarm are:
Swarm
Swarm manager
Swarm host


Swarm
The Swarm container runs on each Swarm host. Each cluster has a unique token, which hosts use to join the cluster. The Swarm container itself is the one that communicates on behalf of its Docker host with the other Docker hosts that are running Docker Swarm, as well as with the Docker Swarm manager.

Swarm manager
The Swarm manager is the host that is the central management point for all the Swarm hosts. The Swarm manager is where you issue all your commands to control the nodes. You can switch between the nodes, join nodes, remove nodes, and manipulate the hosts.

Swarm host
Swarm hosts are the machines that run the Docker containers. The Swarm hosts are managed from the Swarm manager.


Docker Swarm usage


  •  Creating a cluster
  •  Joining nodes 
  • Removing nodes 
  • Managing nodes

The detailed commands for Docker Swarm are in the link below

Saturday, February 17, 2018

Kubernetes: A start

Kubernetes is an open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.

Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.

Kubernetes has a number of features. It can be thought of as:

  • a container platform
  • a microservices platform
  • a portable cloud platform and a lot more

Kubernetes provides a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.

Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring.
  • Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
  • Does not deploy source code and does not build your application. 
  • Does not provide application-level services, such as middleware (e.g., message buses), data-processing frameworks (for example, Spark), databases (e.g., mysql), caches, nor cluster storage systems (e.g., Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the Open Service Broker.
  • Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.


With Kubernetes you can:

  • Orchestrate containers across multiple hosts.
  • Make better use of hardware to maximize resources needed to run your enterprise apps.
  • Control and automate application deployments and updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications and their resources on the fly.
  • Declaratively manage services, which guarantees the deployed applications are always running how you deployed them.
  • Health-check and self-heal your apps with autoplacement, autorestart, autoreplication, and autoscaling.




Let's break down some of the more common terms to help you understand Kubernetes.

Master: The machine that controls Kubernetes nodes. This is where all task assignments originate.

Node: These machines perform the requested, assigned tasks. The Kubernetes master controls them.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage away from the underlying container. This lets you move containers around the cluster more easily.

Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster.

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically get service requests to the right pod—no matter where it moves to in the cluster or even if it’s been replaced.

Kubelet: This service runs on nodes and reads the container manifests and ensures the defined containers are started and running.

kubectl: This is the command line configuration tool for Kubernetes.
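The replication controller's job described above is a reconciliation loop: compare the desired number of pod copies with the actual number, and act on the difference. A toy sketch (this is my own illustration, not Kubernetes code):

```python
# Toy sketch: how a replication controller converges the actual number of
# pod copies toward the desired count.
def reconcile(desired, running):
    """Return the actions needed to move `running` pods to `desired` copies."""
    if running < desired:
        return [("start", desired - running)]
    if running > desired:
        return [("stop", running - desired)]
    return []  # already converged; nothing to do
```

Kubernetes runs this kind of loop continuously, which is what makes the configuration declarative: you state the desired count, and the system keeps steering toward it.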
