Kubernetes is an open-source orchestrator
originally built by Google and now owned and maintained by the Cloud Native Computing
Foundation (CNCF). Kubernetes is
famous for its extensibility and for no-downtime rolling updates. Most public and
private cloud platforms offer push-button mechanisms to spin up a Kubernetes
cluster, pooling the provisioned cloud resources into a single platform
that can be managed as one.
Kubernetes has made a name for itself as the
cloud-native platform for running containers. If you are unfamiliar with the
platform, this guide will help you get acquainted with the
various components of Kubernetes and get started running your containers
on it. As your needs grow, from stateless services to stateful data stores or
custom installations, Kubernetes offers extension points that let you
replace pieces with community extensions or custom-built components.
What is an Orchestrator?
Kubernetes is an example of an orchestrator:
machinery for bringing a group of machines together to act as a single unit (a cluster).
Imagine we collect a dozen or a hundred machines. Rather than connecting to
each one remotely to configure it, we would rather give the orchestrator
a single instruction and have it control each machine for us.
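For example, with Kubernetes we can state the desired outcome once (“run three copies of this web server”) and let the cluster decide which machines actually run them. Below is a minimal sketch using the official Python client (the kubernetes package); the hello-web name and the nginx:1.25 image are placeholder choices for illustration, not anything the cluster requires.

```python
from kubernetes import client, config

# Use the credentials in ~/.kube/config to reach the cluster.
config.load_kube_config()
apps = client.AppsV1Api()

# One instruction: "keep three replicas of this container running."
# Kubernetes decides which machines in the cluster actually run them.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

If one of the machines dies, the cluster notices the replica count has dropped and starts a replacement somewhere else; that is the orchestrator doing the per-machine work for us.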
Examples of Orchestrators
There are many different orchestrators, both open-source and proprietary. Examples include Kubernetes, Docker Swarm, Azure Service Fabric, Amazon Elastic Container Service (ECS), and Mesosphere DC/OS. All of these orchestrators solve the “what to run where” problem.
Kubernetes Architecture
In an exceptionally oversimplified explanation:
1. The Control Plane is all the machines that manage the cluster: all the green boxes. (If this were an organization chart, it would be labeled “management.”) We can think of this as all the plumbing in the system. The work of your web properties and data stores is not run here. You will generally want a few machines doing this work, typically three, five, or nine. In most cloud-provided k8s clusters, these machines are free.
2. The Worker Nodes are all the machines doing work for you and your business: all the blue boxes. These machines run your web properties, back-end services, scheduled jobs, and data stores. (You may choose to store the actual data elsewhere, but the engine that runs the data store will run here.)
3. As a developer or ops engineer, you will likely use kubectl, the command-line utility for Kubernetes, to start, stop, and change content in the cluster.
4. kubectl connects to the Kubernetes API server to give it instructions (a minimal API client sketch follows this list).
5. The API server stores its data in etcd, the Kubernetes data store.
6. The Controller Manager polls against the API and notices the change in etcd (a sketch of watching the API this way also follows the list).
7. The Controller Manager directs the Scheduler to make a change to the environment, and the Scheduler picks a Worker Node (one of the blue boxes) to do the work.
8. The Scheduler tells the Kubelet to make the necessary change. The Kubelet is responsible for the node (machine).
9. The Kubelet fires up a Pod and runs Docker commands to start the container.
10. cAdvisor watches the running pods, reporting events back to the API that get stored in etcd.
11. When a user visits a website, the traffic comes in over the Internet and through a load balancer, which chooses one of the worker nodes (machines) to receive the request (a sketch of declaring this kind of load-balanced Service follows the list).
12. The traffic is forwarded to Kube-Proxy.
13. Kube-Proxy identifies which Pod should receive the traffic and directs it there.
14. The Pod wraps the container, which processes the request and returns the response.
15. The response flows back to the user through Kube-Proxy and the Load Balancer.
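To make steps 3–5 concrete, here is a minimal sketch of talking to the API server the same way kubectl does, again with the Python client. It lists every Pod and the Worker Node the Scheduler placed it on, using the credentials from the same kubeconfig file kubectl reads.

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config, the same file kubectl uses.
config.load_kube_config()
v1 = client.CoreV1Api()

# Ask the API server for every Pod and the node the Scheduler assigned it to.
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```

Running kubectl get pods -A -o wide reports essentially the same information from the command line.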
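Steps 6 and 7 describe components noticing changes by watching the API. The same pattern is available to any client; this sketch streams Pod events from the default namespace (the namespace and timeout are arbitrary choices for illustration).

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Stream ADDED / MODIFIED / DELETED events for Pods, roughly the way
# the control-plane components learn that something changed.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=60):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.status.phase)
```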
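Finally, steps 11–15 assume a load balancer that knows how to reach your Pods. In Kubernetes that is declared as a Service; this sketch creates a Service of type LoadBalancer that sends port 80 traffic to any Pod labeled app: hello-web (the name and label are hypothetical, matching the earlier Deployment sketch).

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A Service of type LoadBalancer: the cloud provisions an external load
# balancer, and Kube-Proxy routes the incoming traffic to a matching Pod.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "hello-web"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```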