
Tech News - DevOps

82 Articles
Best Practices for Managing Remote IT Teams from DevOps.com

Matthew Emerick
16 Oct 2020
1 min read
With IT teams around the world adjusting to remote working while still struggling to maintain productivity and workflows, ensuring DevOps and Agile practices are in place has never been more important. And while typical DevOps professionals probably have significantly better remote desktop setups than most of their business peers, it’s how IT leaders manage these […] The post Best Practices for Managing Remote IT Teams appeared first on DevOps.com.

Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing, and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can run both in the cloud and on-premises. They support machine learning frameworks such as PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS's Deep Learning Containers support the TensorFlow and Apache MXNet frameworks; Google's ML containers do not support Apache MXNet, but they come with PyTorch, TensorFlow, scikit-learn and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that ship with various tools for running deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations and text, and Google Kubernetes Engine clusters, used for orchestrating multiple container deployments. The containers also come with access to packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images now work on cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm. Mike Cheng, software engineer at Google Cloud, said in a blog post, "If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime." He further added, "Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE)." For more information, visit the AI Platform Deep Learning Containers documentation. (A short sketch of running one of these images locally follows at the end of this piece.)

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
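Since Deep Learning Containers are ordinary Docker images, one way to try them is to pull and run an image locally before moving to GCP. The following is a minimal sketch assuming Docker is installed; the image name, the notebook path, and the JupyterLab port are assumptions for illustration, so check the Deep Learning Containers documentation for current values.

```python
import subprocess

# Assumed image name: a TensorFlow CPU variant from GCP's Deep Learning
# Containers registry. Verify the exact repository and tag in the docs.
IMAGE = "gcr.io/deeplearning-platform-release/tf2-cpu"

# Pull the image locally.
subprocess.run(["docker", "pull", IMAGE], check=True)

# Run it in the background, exposing the bundled JupyterLab server
# (assumed to listen on port 8080) and mounting a local notebook directory.
subprocess.run([
    "docker", "run", "-d",
    "-p", "8080:8080",
    "-v", "/path/to/notebooks:/home",   # placeholder local path
    IMAGE,
], check=True)
```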

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits

Sugandha Lahoti
30 Aug 2018
3 min read
Google today announced that it is stepping back from managing the Kubernetes architecture and is funding the Cloud Native Computing Foundation (CNCF) with $9M in GCP credits for a successful transition. These credits are split over a period of three years to cover infrastructure costs. Google is also handing over operational control of the Kubernetes project to the CNCF community, which will now take ownership of day-to-day operational tasks such as testing and builds, as well as maintaining and operating the image repository and download infrastructure.

Kubernetes was first created by Google in 2014. Since then, Google has been providing Kubernetes with the cloud resources that support the project's development, including CI/CD testing infrastructure, container downloads, and other services like DNS, all running on Google Cloud Platform. With Google passing the reins to CNCF, its goal is to make sure "Kubernetes is ready to scale when your enterprise needs it to". The $9M grant will be dedicated to building the world-wide network and storage capacity required to serve container downloads. In addition, a large part of this grant will be dedicated to funding scalability testing, which runs 150,000 containers across 5,000 virtual machines.

"Since releasing Kubernetes in 2014, Google has remained heavily involved in the project and actively contributes to its vibrant community. We also believe that for an open source project to truly thrive, all aspects of a mature project should be maintained by the people developing it. In passing the baton of operational responsibilities to Kubernetes contributors with the stewardship of the CNCF, we look forward to seeing how the project continues to evolve and experience breakneck adoption," said Sarah Novotny, Head of Open Source Strategy for Google Cloud.

The CNCF includes a large number of companies, among them Alibaba Cloud, AWS, Microsoft Azure, IBM Cloud, Oracle, and SAP, all of which will profit from the work of the CNCF and the Kubernetes community. With this move, Google is perhaps also transferring the load of running the Kubernetes infrastructure to these members. As mentioned in their blog post, they look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project's operations. To learn more, check out the CNCF announcement post and the Google Cloud Platform blog.

Kubernetes 1.11 is here!
Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use
Kubernetes Containerd 1.1 Integration is now generally available

Rookout Launches a ‘Live Debugging Heatmap’ To Find Applications On Fire from DevOps.com

Matthew Emerick
15 Oct 2020
1 min read
October 15, 2020 13:00 ET | Source: Rookout
According to the 2020 State of Software Quality report, two out of three software developers estimate they spend at least a day per week troubleshooting issues in their code, and close to one-third spend even more time. DEJ's research shows that organizations are losing $2,129,000 per month, on average, due to delays in application […] The post Rookout Launches a ‘Live Debugging Heatmap’ To Find Applications On Fire appeared first on DevOps.com.

Limited Availability of DigitalOcean Kubernetes announced!

Melisha Dsouza
03 Oct 2018
3 min read
On Monday, the DigitalOcean Kubernetes team announced that DigitalOcean Kubernetes, previously available in Early Access, is now accessible as Limited Availability. DigitalOcean simplifies the container deployment process that accompanies plain Kubernetes and offers Kubernetes container hosting services. Incorporating DigitalOcean's trademark simplicity and ease of use, the team aims to reduce the headache involved in setting up, managing, and securing Kubernetes clusters. DigitalOcean, incidentally, are also the people behind Hacktoberfest, which runs all of October in partnership with GitHub to promote open source contribution.

The Early Access release was well received by users, who commented on the simplicity of configuring and provisioning a cluster and appreciated that deploying and running containerized services took hardly any time. Users also raised issues and feedback that were used to increase reliability and resolve a number of bugs, improving the user experience for the Limited Availability of DigitalOcean Kubernetes. The team also notes that during Early Access they had a limited set of free hardware resources for users to deploy to, which restricted the total number of users they could grant access. In the Limited Availability phase, the team hopes to open up access to anyone who requests it. That being said, Limited Availability will be a paid product.

Why should users consider DigitalOcean Kubernetes?
Each customer has their own dedicated managed cluster, which provides security and isolation for their containerized applications with access to the full Kubernetes API.
DigitalOcean products provide storage for any amount of data.
Cloud Firewalls make it easy to manage network traffic in and out of the Kubernetes cluster.
DigitalOcean provides cluster security scanning capabilities to alert users of flaws and vulnerabilities.
In typical Kubernetes environments, metrics, logs, and events can be lost if nodes are spun down. To help developers learn from the performance of past environments, DigitalOcean stores this information separately from the node indefinitely.
To know more about these features, head over to their official blog page.

Some benefits for users of Limited Availability:
Users will be able to provision Droplet workers in many more regions with full support.
To test out their containers in an orchestrated environment, they can start with a single-node cluster using a $5/mo Droplet.
As they scale their applications, users can add worker pools of various Droplet sizes, attach persistent storage using DigitalOcean Block Storage for $0.10/GB per month, and expose Kubernetes services with a public IP using $10/mo Load Balancers. The Load Balancer is a highly available service designed to protect against application or hardware failures while spreading traffic across available resources. (A small worked cost example follows at the end of this piece.)

Users seem really excited about this upgrade (source: DigitalOcean Blog). Users who have already signed up for Early Access will receive an email shortly with details about how to get started. To know more about this news, head over to DigitalOcean's blog post.

Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
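As a rough illustration of how that pricing adds up, here is a small worked example in Python using the figures quoted above ($5/mo worker Droplets, $0.10/GB per month for Block Storage, $10/mo Load Balancers); the node count, storage size, and Load Balancer count are made-up inputs, not DigitalOcean defaults.

```python
# Hypothetical cost estimate using the prices quoted in the announcement.
DROPLET_PER_MONTH = 5.00         # $5/mo entry-level worker Droplet
BLOCK_STORAGE_PER_GB = 0.10      # $0.10/GB per month
LOAD_BALANCER_PER_MONTH = 10.00  # $10/mo per Load Balancer

def estimate_monthly_cost(workers: int, storage_gb: int, load_balancers: int) -> float:
    """Estimate the monthly cost of a small DigitalOcean Kubernetes cluster."""
    return (workers * DROPLET_PER_MONTH
            + storage_gb * BLOCK_STORAGE_PER_GB
            + load_balancers * LOAD_BALANCER_PER_MONTH)

# Example: 3 worker Droplets, 50 GB of block storage, 1 Load Balancer.
print(estimate_monthly_cost(3, 50, 1))  # -> 30.0 (dollars per month)
```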

Low Carbon Kubernetes Scheduler: A demand side management solution that consumes electricity in low grid carbon intensity areas

Savia Lobo
27 Jun 2019
7 min read
Machine learning experts are increasingly interested in researching how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate. For example, machine learning can be used to regulate cloud data centres, which manage an important asset, data, and typically comprise tens to thousands of interconnected servers consuming a substantial amount of electrical energy. Researchers from Huawei published a paper in April 2015 estimating that by 2030 data centres will use anywhere between 3% and 13% of global electricity.

At the ICT4S 2019 conference held in Lappeenranta, Finland, from June 10-15, researchers from the University of Bristol, UK, introduced their research on a low carbon scheduling policy for the open-source Kubernetes container orchestrator. The "Low Carbon Kubernetes Scheduler" can provide demand-side management (DSM) by migrating consumption of electric energy in cloud data centres to countries with the lowest carbon intensity of electricity. In their paper the researchers highlight, "All major cloud computing companies acknowledge the need to run their data centres as efficiently as possible in order to address economic and environmental concerns, and recognize that ICT consumes an increasing amount of energy".

Since the end of 2017, Google Cloud Platform has run its data centres entirely on renewable energy, and Microsoft has announced that its global operations have been carbon neutral since 2012. However, not all cloud providers have been able to make such an extensive commitment. Oracle Cloud, for example, is currently 100% carbon neutral in Europe, but not in other regions.

The scheduler selects compute nodes based on the real-time carbon intensity of the electric grid in the region they are in. Real-time APIs that report grid carbon intensity are available for an increasing number of regions, but not exhaustively around the planet. To effectively demonstrate the scheduler's ability to perform global load balancing, the researchers therefore also evaluated it using solar irradiance as the metric. "While much of the research on DSM focusses on domestic energy consumption there has also been work investigating DSM by cloud data centres", the paper mentions. Demand-side management (DSM) refers to any initiative that affects how and when electricity is being required by consumers. (Figure source: CEUR-WS.org)

Existing schedulers consider singular data centres rather than taking a more global view. The Low Carbon Scheduler, on the other hand, considers carbon intensity across regions, since scaling a large number of containers up and down can be done in a matter of seconds. Each national electric grid contains electricity generated from a variable mix of alternative sources, and the carbon intensity of the electricity provided by the grid anywhere in the world is a measure of the amount of greenhouse gas released into the atmosphere from the combustion of fossil fuels for the generation of electricity. Significant generation sites report the volume of electricity input to the grid at regular intervals to the organizations operating the grid (for example, the National Grid in the UK) in real time via APIs. These APIs typically allow retrieval of the production volumes and thus make it possible to calculate the carbon intensity in real time.

The Low Carbon Scheduler collects the carbon intensity from the available APIs and ranks the regions to identify the one with the lowest carbon intensity (a minimal sketch of this collect-and-rank step appears at the end of this piece). For the European Union, such an API is provided by the European Network of Transmission System Operators for Electricity (www.entsoe.eu), and for the UK by the Balancing Mechanism Reporting Service (www.elexon.co.uk).

Why Kubernetes for building a low carbon scheduler

Kubernetes can make use of GPUs and has also been ported to run on the ARM architecture. The researchers also note that Kubernetes has, to a large extent, won the container orchestration war, and its support for extensibility and plugins makes it "most suitable for which to develop a global scheduler and bring about the widest adoption, thereby producing the greatest impact on carbon emission reduction". Kubernetes allows schedulers to run in parallel, which means the low carbon scheduler does not need to re-implement the pre-existing, sophisticated bin-packing strategies present in Kubernetes; it need only apply a scheduling layer that complements the existing capabilities. According to the researchers, "Our design, as it operates at a higher level of abstraction, assures that Kubernetes continues to deal with bin-packing at the node level, while the scheduler performs global-level scheduling between data centres". The official Kubernetes documentation describes three possible ways of extending the default scheduler (kube-scheduler): adding rules to the scheduler source code and recompiling, implementing one's own scheduler process that runs instead of, or alongside, kube-scheduler, or implementing a scheduler extender.

Evaluating the performance of the low carbon Kubernetes scheduler

The researchers recorded the carbon intensities of the countries in which the major cloud providers operate data centres between 18.2.2019 13:00 UTC and 21.4.2019 9:00 UTC. (The paper includes a table of countries where the largest public cloud providers operated data centres as of April 2019; source: CEUR-WS.org.) They then ranked all countries by the carbon intensity of their electricity in 30-minute intervals. Among the total set of 30-minute values, Switzerland had the lowest carbon intensity (ranked first) in 0.57% of the intervals, Norway in 0.31%, France in 0.11% and Sweden in 0.01%. However, the list of the least carbon-intense countries only contains locations in central Europe.

To demonstrate the scheduler's suitability for globally distributed deployments, the researchers chose to optimize placement to regions with the greatest degree of solar irradiance, a variant they term a Heliotropic Scheduler. It is called 'heliotropic' to differentiate it from a 'follow-the-sun' application management policy, which relates to meeting customer demand around the world by placing staff and resources in proximity to those locations (thereby making them available to clients at lower latency and at a suitable time of day). A 'heliotropic' policy, on the other hand, goes to where sunlight, and by extension solar irradiance, is abundant. They evaluated the Heliotropic Scheduler implementation by running BOINC jobs on Kubernetes. BOINC (Berkeley Open Infrastructure for Network Computing) is a software platform for volunteer computing that allows users to contribute computational capacity from their home PCs towards scientific research; Einstein@Home, SETI@home and IBM World Community Grid are some of the most widely supported projects.

The researchers say: "Even though many cloud providers are contracting for renewable energy with their energy providers, the electricity these data centres take from the grid is generated with release of a varying amount of greenhouse gas emissions into the atmosphere. Our scheduler can contribute to moving demand for more carbon intense electricity to less carbon intense electricity". While the paper concludes that a wind-dominant, solar-complementary strategy is superior for the integration of renewable energy sources into cloud data centres' infrastructure, the Low Carbon Scheduler provides a proof of concept demonstrating how to reduce carbon intensity in cloud computing. To know more about this implementation for lowering carbon emissions, read the research paper.

Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate
VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
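The collect-and-rank step at the heart of the scheduler can be sketched in a few lines of Python. This is an illustration only: the endpoint URLs, response field, and region names below are hypothetical placeholders, not the researchers' actual implementation or any grid operator's real API.

```python
import requests

# Hypothetical mapping of deployment regions to carbon-intensity API endpoints.
REGION_APIS = {
    "europe-west1": "https://example.org/carbon-intensity/BE",
    "europe-north1": "https://example.org/carbon-intensity/FI",
    "us-central1": "https://example.org/carbon-intensity/US-MIDW",
}

def current_intensity(url: str) -> float:
    """Fetch the current grid carbon intensity (gCO2/kWh) from one API."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # Assumed response shape: {"carbon_intensity": 123.4}
    return float(resp.json()["carbon_intensity"])

def rank_regions() -> list[tuple[str, float]]:
    """Return regions sorted from lowest to highest carbon intensity."""
    readings = {region: current_intensity(url) for region, url in REGION_APIS.items()}
    return sorted(readings.items(), key=lambda kv: kv[1])

if __name__ == "__main__":
    best_region, intensity = rank_regions()[0]
    print(f"Schedule new pods in {best_region} ({intensity:.0f} gCO2/kWh)")
```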

Google to be the founding member of CDF (Continuous Delivery Foundation)

Bhagyashree R
15 Mar 2019
3 min read
On Tuesday, Google announced that it is one of the founding members of the newly formed Continuous Delivery Foundation (CDF). As a part of its membership, Google will be contributing to two projects, namely Spinnaker and Tekton.

About the Continuous Delivery Foundation

The formation of CDF was announced at the Linux Foundation Open Source Leadership Summit on Tuesday. CDF will act as a "vendor-neutral home" for some of the most important open source projects for continuous delivery and for specifications to speed up the release pipeline process. https://twitter.com/linuxfoundation/status/1105515314899492864

The existing CI/CD ecosystem is heavily fragmented, which makes it difficult for developers and companies to decide on particular tooling for their projects. DevOps practitioners also often find it very challenging to gather guidance on software delivery best practices. CDF was formed to make CI/CD tooling easier and to define the best practices and guidelines that will enable application developers to deliver better and more secure software at speed. CDF currently hosts some of the most popular CI/CD tools, including Jenkins, Jenkins X, Spinnaker, and Tekton. The foundation is backed by 20+ founding members, which include Alauda, Alibaba, Anchore, Armory.io, Atos, Autodesk, Capital One, CircleCI, CloudBees, DeployHub, GitLab, Google, HSBC, Huawei, IBM, JFrog, Netflix, Puppet, Rancher, Red Hat, SAP, Snyk, and SumoLogic.

Why Google joined CDF

As a part of this foundation, Google will be working on Spinnaker and Tekton. Originally created by Netflix and jointly led by Netflix and Google, Spinnaker is an open source, multi-cloud delivery platform. It comes with various features for making continuous delivery reliable, including support for advanced deployment strategies, an open source canary analysis service named Kayenta, and more. Spinnaker's user community has great experience in the continuous delivery domain, and by joining CDF Google aims to share that expertise with the broader community.

Tekton is a set of shared, open source components for building CI/CD systems. It allows you to build, test, and deploy applications across multiple environments such as virtual machines, serverless, Kubernetes, or Firebase. In the next few months, we can expect to see support for results and event triggering in Tekton. Google is also planning to work with CI/CD vendors to build an ecosystem of components that will allow users to use Tekton with existing tools like Jenkins X, Kubernetes native, and others.

Dan Lorenc, Staff Software Engineer at Google Cloud, sharing Google's motivation behind joining CDF, said, "Continuous Delivery is a critical part of modern software development, but today space is heavily fragmented. The Tekton project addresses this problem by working with the open source community and other leading vendors to collaborate on the modernization of CI/CD infrastructure." Kim Lewandowski, Product Manager at Google Cloud, said, "The ability to deploy code securely and as fast as possible is top of mind for developers across the industry. Only through best practices and industry-led specifications will developers realize a reliable and portable way to take advantage of continuous delivery solutions. Google is excited to be a founding member of the CDF and to work with the community to foster innovation for deploying software anywhere."

To know more, check out the official announcement on the Google Open Source blog.

Google Cloud Console Incident Resolved!
Cloudflare takes a step towards transparency by expanding its government warrant canaries
Google to acquire cloud data migration start-up 'Alooma'

Kubernetes 1.10 released

Vijin Boricha
09 Apr 2018
2 min read
Kubernetes has announced its first release of 2018: Kubernetes 1.10. This release focuses mainly on stabilizing three key areas: storage, security, and networking. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications, initially designed by Google and now maintained by the Cloud Native Computing Foundation.

Storage - CSI and Local Storage move to beta: The Container Storage Interface (CSI) moves to beta. New volume plugins can now be installed in much the same way a pod is deployed, which lets third-party storage providers develop independent solutions outside the core Kubernetes codebase. Local storage management has also progressed to beta, making locally attached storage available as a persistent volume source. This offers lower cost and higher performance for distributed file systems and databases.

Security - External credential providers (alpha): Complementing the Cloud Controller Manager feature added in 1.9, Kubernetes 1.10 adds external credential providers. This enables cloud providers and other platform developers to release binary plugins to handle authentication for specific cloud-provider Identity and Access Management services.

Networking - CoreDNS as a DNS provider (beta): Kubernetes now provides the ability to switch the DNS service to CoreDNS during installation. CoreDNS is a single process that supports more use cases.

To get a complete list of additional features of this release, visit the Changelog.

Check out other related posts:
The key differences between Kubernetes and Docker Swarm
Apache Spark 2.3 now has native Kubernetes support!
OpenShift 3.9 released ahead of planned schedule

VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service

Savia Lobo
27 Jun 2018
2 min read
VMware recently entered the Kubernetes-as-a-Service space by launching VMware Kubernetes Engine (VKE), which provides a multi-cloud experience. VKE is a fully managed service offered through a SaaS model. It allows customers to use Kubernetes easily without having to worry about the deployment and operation of Kubernetes clusters. Kubernetes lets users manage clusters of containers while also making it easier to move applications between public hosted clouds. By adding Kubernetes on cloud, VMware offers a managed service that uses Kubernetes containers with reduced complexity. VMware's Kubernetes engine will face stiff competition from Google Cloud and Microsoft Azure, among others. Recently, Rackspace also announced its partnership with HPE to develop a new Kubernetes-based cloud offering.

VMware Kubernetes Engine (VKE) features include:

VMware Smart Cluster: the selection of compute resources to constantly optimize resource usage, provide high availability, and reduce cost. It also enables the management of cost-effective, scalable Kubernetes clusters optimized to application requirements. Users can also have role-based access and visibility only to their predefined environment with the smart cluster.

Fully managed by VMware: VKE is fully managed by VMware, which ensures that clusters always run efficiently, with multi-tenancy, seamless Kubernetes upgrades, high availability, and security.

Security by default: VKE is highly secure, with multi-tenancy, deep policy control, dedicated AWS accounts per organization, logical network isolation, and integrated identity and access management with single sign-on.

Global availability: VKE has a region-agnostic user interface and is available across three AWS regions, US-East1, US-West2, and EU-West1, giving users the choice of which region to run clusters in.

Read the full coverage about VMware Kubernetes Engine (VKE) on the official website.

Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
Hortonworks partner with Google Cloud to enhance their Big Data strategy

Pivotal open sources kpack, a Kubernetes-native image build service

Sugandha Lahoti
23 Aug 2019
2 min read
In April, Pivotal and Heroku teamed up to create Cloud Native Buildpacks for Kubernetes. Cloud Native Buildpacks turn source code into production-ready, OCI-compatible Docker images and are based on the popular buildpack model. Yesterday, they open-sourced kpack, a set of experimental build-service Kubernetes resource controllers. Essentially, kpack is Kubernetes' native way to build and update containers: it automates the creation and update of container images that can be run anywhere. Pivotal's commercial implementation of kpack comes via Pivotal Build Service. Users can run it atop Kubernetes to boost developer productivity. The Build Service integrates kpack with buildpacks and the Kubernetes permissions model. kpack presents a CRD as its interface, so users can interact with it using all Kubernetes API tooling, including kubectl (a hypothetical sketch of creating such a resource programmatically follows at the end of this piece).

Pivotal has open-sourced kpack for two reasons, as mentioned in their blog post: "First, to provide Build Service's container building functionality and declarative logic as a consumable component that can be used by the community in other great products. Second, to provide a first-class interface, to create and modify image resources for those who desire more granular control."

Many companies and communities have announced that they will be using kpack in their projects. Project riff will use kpack to build functions to handle events, and the Cloud Foundry community plans to feature kpack as the new app staging mechanism in the Cloud Foundry Application Runtime. Check out the kpack repo for more details. You can also request alpha access to Build Service.

In other news, Pivotal and VMware, the former's parent company, are negotiating a deal for VMware to acquire Pivotal, as per a recent regulatory filing from Dell. VMware, Pivotal, and Dell have jointly filed the document informing government regulators about the potential transaction.

Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
VMware's plan to acquire Pivotal Software reflects a rise in Pivotal's shares
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads.
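Because kpack's interface is a CRD, an image build can also be requested programmatically with any Kubernetes client rather than only through kubectl. The sketch below uses the official Python client; the CRD group, version, and spec fields are assumptions for illustration (the actual schema lives in the kpack repo), and the registry tag, Git URL, and service account are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

# Assumed CRD coordinates and spec layout; verify against the kpack repo.
image_resource = {
    "apiVersion": "build.pivotal.io/v1alpha1",
    "kind": "Image",
    "metadata": {"name": "sample-app-image"},
    "spec": {
        "tag": "registry.example.com/sample-app",   # placeholder registry tag
        "serviceAccount": "build-service-account",  # placeholder SA with registry creds
        "builder": {"name": "default-builder", "kind": "Builder"},
        "source": {"git": {"url": "https://github.com/example/sample-app",
                           "revision": "master"}},
    },
}

# Create the custom resource; kpack's controllers then build and push the image.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="build.pivotal.io",
    version="v1alpha1",
    namespace="default",
    plural="images",
    body=image_resource,
)
```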

Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless

Richard Gall
18 Sep 2018
3 min read
The API is the building block of much modern software. With Kong 1.0, launching today at Kong Summit, Kong believes it has cemented its position as the go-to platform for developing APIs on modern infrastructures like cloud-native, microservices, and serverless. The release of the first stable version of Kong marks an important milestone for the company as it looks to develop what it calls a 'service control platform': essentially a tool that will allow developers, DevOps engineers, and architects to manage their infrastructure at every point, however they choose to build it. It should, in theory, offer a fully integrated solution that lets you handle APIs, manage security permissions, and even leverage the latest in cutting-edge artificial intelligence for analytics and automation.

CEO Augusto Marietti said that "API management is rapidly evolving with the industry, and technology must evolve with it. We built Kong from the ground up to meet these needs -- Kong is the only API platform designed to manage and broker the demands that in-flight data increasingly place on modern software architectures."

How widely used is Kong?
According to the press release, Kong has been downloaded 45 million times, making it the most widely used open source API platform. The team stress that reaching Kong 1.0 has taken three years of intensive development work, done alongside customers from a wide range of organizations, including Yahoo! Japan and Healthcare.gov. Kanaderu Fukuda, senior manager of the Computing Platform Department at Yahoo! Japan, said: "as Yahoo! Japan shifts to microservices, we needed more than just an API gateway – we needed a high-performance platform to manage all APIs across a modern architecture... With Kong as a single point for proxying and routing traffic across all of our API endpoints, we eliminated redundant code writing for authentication and authorization, saving hundreds of hours. Kong positions us well to take advantage of future innovations, and we're excited to expand our use of Kong for service mesh deployments next."

New features in Kong 1.0
Kong 1.0, according to the release materials, "combines sub-millisecond low latency, linear scalability and unparalleled flexibility." Put simply, it's fast but also easy to adapt and manipulate according to your needs: everything a DevOps engineer or solutions architect would want. Although it isn't mentioned specifically, Kong is a tool that exemplifies the work of site reliability engineers (SREs): it is designed to manage the relationships between services and to ensure they not only interact with each other the way they should, but do so with minimum downtime. The Kong team appear to have a huge amount of confidence in the launch of the platform; the extent to which they can grow their customer base depends a lot on how the marketplace evolves, and how much the demand for forward-thinking software architecture grows over the next couple of years.

Read next:
How Gremlin is making chaos engineering accessible [Interview]
Is the 'commons clause' a threat to open source?

DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps

Melisha Dsouza
12 Dec 2018
2 min read
At KubeCon+CloudNativeCon this week, DigitalOcean announced the launch of its Kubernetes-as-a-Service offering to all developers. This is a limited release, with full general availability planned for early 2019. DigitalOcean first announced its container offering through an early access program in May this year, followed by its limited availability in October. Building on the simplicity that customers appreciated most, DigitalOcean Kubernetes (DOK8s) claims to be a powerfully simple managed Kubernetes service. Once customers define the size and location of their worker nodes, DigitalOcean will provision, manage, and optimize the services needed to run a Kubernetes cluster. DOK8s is easy to set up as well.

During the announcement, DigitalOcean VP of Product Shiven Ramji said, "Kubernetes promises to be one of the leading technologies in a developer's arsenal to gain the scalability, portability and availability needed to build modern apps. Unfortunately, for many, it's extremely complex to manage and deploy. With DigitalOcean Kubernetes, we make running containerized apps consumable for any developer, regardless of their skills or resources."

The new release builds on the early access release of the service, which included capabilities like node provisioning, durable storage, firewalls, and load balancing. The new features added now include:
Guided configuration experiences to assist users in provisioning, configuring and deploying clusters
Open APIs to enable easy integrations with developer tools
Ability to programmatically create and update cluster and node settings
Expanded version support, including Kubernetes version 1.12.1, with support for 1.13.1 coming shortly
Support for DOK8s in the DigitalOcean API, making it easy for users to create and manage their clusters through DigitalOcean's API (see the sketch at the end of this piece)
Effective pricing for DigitalOcean Kubernetes: customers pay only for the underlying resources they use (Droplets, Block Storage, and Load Balancers)

Head over to DigitalOcean's blog to know more about this announcement.

'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
Stripe open sources 'Skycfg', a configuration builder for Kubernetes
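For the programmatic route mentioned in the feature list above, a cluster could be created by POSTing to DigitalOcean's REST API with a personal access token. The sketch below is an assumption-heavy illustration: the endpoint path, payload fields, and the version and size slugs should all be checked against DigitalOcean's official API reference before use.

```python
import os
import requests

# Personal access token generated in the DigitalOcean control panel.
TOKEN = os.environ["DIGITALOCEAN_TOKEN"]

# Assumed v2 endpoint and payload shape; verify against the API reference.
payload = {
    "name": "example-cluster",
    "region": "nyc1",
    "version": "1.12.1-do.2",           # placeholder version slug
    "node_pools": [{
        "size": "s-1vcpu-2gb",          # placeholder Droplet size slug
        "count": 3,
        "name": "worker-pool",
    }],
}

resp = requests.post(
    "https://api.digitalocean.com/v2/kubernetes/clusters",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["kubernetes_cluster"]["id"])  # assumed response key
```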

Kubernetes Containerd 1.1 Integration is now generally available

Savia Lobo
25 May 2018
3 min read
Just six months after releasing the alpha version of the Kubernetes containerd integration, the community has declared that the upgraded containerd 1.1 is now generally available. Containerd 1.1 can be used as the container runtime for production Kubernetes clusters. It works well with Kubernetes 1.10 and supports all Kubernetes features. Let's look at the key upgrades in Kubernetes containerd 1.1.

Architecture upgrade
(Figure: containerd 1.1 architecture with the CRI plugin)
In version 1.1, the cri-containerd daemon is changed to a containerd CRI plugin. The CRI plugin is the default and is built into containerd 1.1; it interacts with containerd through direct function calls. Kubernetes can now use containerd directly, as this new architecture makes the integration more stable and efficient and eliminates another gRPC hop in the stack. The cri-containerd daemon is therefore no longer needed.

Performance upgrades
Performance optimization has been a major focus of containerd 1.1, particularly pod startup latency and daemon resource usage.

Pod startup latency: the containerd 1.1 integration has lower pod startup latency than the Docker 18.03 CE integration with dockershim, based on results from the '105 pod batch startup benchmark' (lower is better). (Figure: pod startup latency)

CPU and memory usage: the containerd 1.1 integration consumes less CPU and memory overall than the Docker 18.03 CE integration with dockershim at a steady state with 105 pods. The results differ with the number of pods running on the node; 105 is the current default for the maximum number of user pods per node. (Figures: CPU usage and memory usage) Compared with the Docker 18.03 CE integration with dockershim, the containerd 1.1 integration has 30.89% lower kubelet CPU usage, 68.13% lower container runtime CPU usage, 11.30% lower kubelet resident set size (RSS) memory usage, and 12.78% lower container runtime RSS memory usage.

What would happen to Docker Engine?
Switching to containerd does not mean you will be unable to use Docker Engine. In fact, Docker Engine is built on top of containerd, and the next release of Docker Community Edition (Docker CE) will use containerd version 1.1. (Figure: Docker Engine built on top of containerd) Containerd is used by both the kubelet and Docker Engine, which means users choosing the containerd integration will not only get new Kubernetes features, performance, and stability improvements, but also have the option of keeping Docker Engine around for other use cases.

Read more interesting details about containerd 1.1 in the official Kubernetes blog post.

Top 7 DevOps tools in 2018
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes
What's new in Docker Enterprise Edition 2.0?

NeuVector upgrades Kubernetes container security with the release of Containerd and CRI-O run-time support

Sugandha Lahoti
11 Dec 2018
2 min read
At the ongoing KubeCon + CloudNativeCon North America 2018, NeuVector has upgraded its line of container network security with the release of containerd and CRI-O run-time support. Attendees of the conference are invited to learn how customers use NeuVector and get 1:1 demos of the platform's new capabilities.

Containerd is a Cloud Native Computing Foundation incubating project. It is a container run-time built to emphasize simplicity, robustness, and portability while managing the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage, network attachments, and more. NeuVector is testing the containerd version on the latest IBM Cloud Kubernetes Service release, which uses the containerd run-time.

CRI-O is an implementation of the Kubernetes container run-time interface enabling OCI-compatible run-times, and a lightweight alternative to Docker as a run-time for Kubernetes. CRI-O is made up of several components, including:
an OCI-compatible runtime
containers/storage
containers/image
networking (CNI)
container monitoring (common)
security provided by several core Linux capabilities

With this newly added support, organizations using containerd or CRI-O can deploy NeuVector to secure their container environments.

Stripe open sources 'Skycfg', a configuration builder for Kubernetes
Kubernetes 1.13 released with new features and fixes to a major security flaw
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads

Google and Waze share their best practices for canary deployment using Spinnaker

Bhagyashree R
18 Jan 2019
3 min read
On Monday, Eran Davidovich, a System Operations Engineer at Waze, and Théo Chamley, Solutions Architect at Google Cloud, shared their experience of using Spinnaker for canary deployments. Waze estimates that canary deployment has helped them prevent a quarter of all incidents on their services.

What is Spinnaker?
Developed at Netflix, Spinnaker is an open source, multi-cloud continuous delivery platform that helps developers manage app deployments on different computing platforms, including Google App Engine, Google Kubernetes Engine, AWS, Azure, and more. The platform also enables advanced deployment methods like canary deployment, in which developers roll out changes to a subset of users to analyze whether or not the code release provides the desired outcome. If the new code poses any risks, you can mitigate them before releasing the update to all users.

In April 2018, Google and Netflix introduced a new feature for Spinnaker called Kayenta, with which you can create an automated canary analysis for your project. Though you can build your own canary deployment or other advanced deployment patterns, Spinnaker and Kayenta together aim to make it much easier and more reliable. The tasks that Kayenta automates include fetching user-configured metrics from their sources, running statistical tests, and providing an aggregated score for the canary. On the basis of the aggregated score and the configured limits for success, Kayenta automatically promotes or fails the canary, or triggers a human approval path. (A minimal sketch of this scoring idea follows at the end of this piece.)

Canary best practices
The following best practices help ensure that your canary analyses are reliable and relevant:
Compare the canary against a baseline rather than directly against production, because many differences can skew the results of the analysis, such as cache warm-up time, heap size, and load-balancing algorithms.
Run the canary for long enough, at least 50 pieces of time-series data per metric, to ensure that the statistical analysis is relevant.
Choose metrics that represent different aspects of your application's health. Three aspects are critical according to the SRE book: latency, errors, and saturation.
Put a standard set of reusable canary configs in place. This gives anyone on your team a starting point and keeps the canary configurations maintainable.

Thunderbird welcomes the new year with better UI, Gmail support and more
Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!
AIOps – Trick or Treat?
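To make the aggregation step concrete, here is a minimal, hypothetical Python sketch of the kind of scoring Kayenta automates: it compares canary metrics against a baseline, scores each metric, averages the scores, and promotes or fails the canary against a threshold. The metric names, comparison rule, and thresholds are illustrative assumptions, not Kayenta's actual algorithm.

```python
from statistics import mean

# Illustrative time-series samples for two metrics (lower is better for both).
baseline = {"latency_ms": [120, 118, 125, 130, 122],
            "error_rate": [0.01, 0.02, 0.01, 0.01, 0.02]}
canary   = {"latency_ms": [121, 119, 133, 128, 124],
            "error_rate": [0.01, 0.01, 0.02, 0.01, 0.01]}

TOLERANCE = 0.10        # allow the canary mean to be up to 10% worse
PASS_THRESHOLD = 0.75   # aggregate score needed to promote the canary

def metric_score(base: list[float], cand: list[float]) -> float:
    """Score 1.0 if the canary mean is within tolerance of the baseline, else 0.0."""
    return 1.0 if mean(cand) <= mean(base) * (1 + TOLERANCE) else 0.0

def aggregate_score(base: dict, cand: dict) -> float:
    """Average per-metric scores into one canary score between 0 and 1."""
    return mean(metric_score(base[m], cand[m]) for m in base)

score = aggregate_score(baseline, canary)
print("promote" if score >= PASS_THRESHOLD else "fail", f"(score={score:.2f})")
```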