
Tech News - DevOps

82 Articles

Russian censorship board threatens to block search giant Yandex due to pirated content

Sugandha Lahoti
30 Aug 2018
3 min read
Update, 31st August 2018: Yandex has refused to remove the pirated content. According to a statement from the company, Yandex believes the law is being misinterpreted: while pirated content must be removed from the sites hosting it, removing links to such content from search engines falls outside the scope of the current legislation. “In accordance with the Federal Law On Information, Information Technologies, and Information Protection, the mechanics are as follows: pirated content should be blocked by site owners and on the so-called mirrors of these sites,” Yandex says. A Yandex spokesperson said that the company works in “full compliance” with the law. “We will work with market participants to find a solution within the existing legal framework.” Check out more info on Interfax.

Roskomnadzor has found Russian search giant Yandex guilty of hosting pirated content. The Federal Service for Supervision of Communications, Information Technology and Mass Media, or Roskomnadzor, is the Russian federal executive body responsible for censorship in media and telecommunications. The Moscow City Court found the website guilty of including links to pirated content last week. The search giant was asked to remove those links, and the mandate was reiterated by Roskomnadzor this week. Per the authorities, if Yandex does not take action by the end of today, its video platform will be blocked by the country's ISPs.

Last week, major Russian broadcasters Gazprom-Media, National Media Group (NMG), and others protested against pirated content by removing their TV channels from Yandex’s ‘TV Online’ service. They said they would allow their content to appear again only if Yandex removes pirated content completely. Following this, Gazprom-Media filed a copyright infringement complaint with the Moscow City Court, which subsequently compelled Yandex to remove links to pirated TV shows belonging to Gazprom-Media.

Pirated content has been a long-standing challenge for the telecom sector and is yet to be eradicated. It not only cuts into revenues; watching illegally distributed movies also violates copyright and intellectual property laws. The Yandex website is heavily populated with pirated content, especially TV shows and movies (image source: Yandex.video).

In a statement to Interfax, Deputy Head of Roskomnadzor Vadim Subbotin warned that Yandex.video will be blocked Thursday night (August 30) if the pirate links aren’t removed. “If the company does not take measures, then according to the law, the Yandex.Video service must be blocked. There’s nowhere to go,” Subbotin said. The search giant has not yet responded to the accusation. You can check out the detailed coverage of the news on Interfax.

Related reading:
- Adblocking and the Future of the Web
- Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran
- YouTube has a $25 million plan to counter fake news and misinformation


Introducing OpenStack Foundation’s Kata Containers 1.0

Savia Lobo
24 May 2018
2 min read
The OpenStack Foundation has launched version 1.0 of its first non-OpenStack project, Kata Containers. Kata Containers is the result of combining two leading open source virtualized container projects: Intel’s Clear Containers and Hyper’s runV technology.

Kata Containers gives developers a lighter, faster, and more agile container management technology across stacks and platforms, offering a container-like experience together with the security and isolation of virtual machines. Kata Containers delivers an OCI-compatible runtime with seamless integration for Docker and Kubernetes. It runs a lightweight VM for every container, so that each container gets hardware isolation similar to what is expected from a virtual machine. Although hosted by the OpenStack Foundation, Kata Containers is designed to be platform- and architecture-agnostic.

Kata Containers 1.0 components include:
- Kata Containers runtime 1.0.0 (in the /runtime repo)
- Kata Containers proxy 1.0.0 (in the /proxy repo)
- Kata Containers shim 1.0.0 (in the /shim repo)
- Kata Containers agent 1.0.0 (in the /agent repo)
- KSM throttler 1.0.0 (in the /ksm-throttler repo)
- Guest operating system building scripts (in the /osbuilder repo)

Intel, Red Hat, Canonical, and cloud vendors such as Google, Huawei, NetApp, and others have offered to financially support the Kata Containers project. Read more about Kata Containers on their official website and on the GitHub repo.

Related reading:
- Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
- What to expect from vSphere 6.7
- Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
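To make the Docker integration described above concrete, here is a minimal sketch of launching a container under a Kata runtime from Python by shelling out to Docker. It assumes a host where Kata Containers is installed and registered with Docker under the runtime name `kata-runtime`; that name and the installation steps vary by version, so treat them as assumptions and check the Kata documentation for your setup.

```python
import subprocess

def run_with_kata(image: str, *command: str) -> str:
    """Run `command` in `image` using a Kata runtime instead of runc.

    Assumes Docker is configured with a runtime named "kata-runtime"
    (an assumption here; confirm the registered name with `docker info`).
    """
    result = subprocess.run(
        ["docker", "run", "--rm", "--runtime", "kata-runtime", image, *command],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Each container started this way is backed by its own lightweight VM,
    # which is what gives Kata its VM-like isolation.
    print(run_with_kata("alpine:3.8", "uname", "-a"))
```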


VMware signs definitive agreement to acquire Pivotal Software and Carbon Black

Vincy Davis
23 Aug 2019
3 min read
Yesterday, VMware announced in a press release that it has entered a definitive agreement to acquire Carbon Black, a cloud-native endpoint security software developer. According to the agreement, “VMware will acquire Carbon Black in an all cash transaction for $26 per share, representing an enterprise value of $2.1 billion.” VMware intends to use Carbon Black’s big data and behavioral analytics to offer customers advanced threat detection and behavioral insight to defend against sophisticated attacks. In short, it aims to protect clients through big data, behavioral analytics, and AI.

Pat Gelsinger, the CEO of VMware, says, “By bringing Carbon Black into the VMware family, we are now taking a huge step forward in security and delivering an enterprise-grade platform to administer and protect workloads, applications, and networks.” He adds, “With this acquisition, we will also take a significant leadership position in security for the new age of modern applications delivered from any cloud to any device.”

Yesterday, after much speculation, VMware also announced that it has agreed to acquire Pivotal Software, a cloud-native platform provider, for an enterprise value of $2.7 billion. Dell Technologies is a major stakeholder in both companies.

Lately, VMware has been investing heavily in Kubernetes. Last year, it launched VMware Kubernetes Engine (VKE) to offer Kubernetes-as-a-Service. This year, Pivotal teamed up with the Heroku team to create Cloud Native Buildpacks for Kubernetes and recently also launched Pivotal Spring Runtime for Kubernetes. With Pivotal, VMware plans to “deliver a comprehensive portfolio of products, tools and services necessary to build, run and manage modern applications on Kubernetes infrastructure with velocity and efficiency.”

Read More: VMware’s plan to acquire Pivotal Software reflects a rise in Pivotal’s shares

Gelsinger told ZDNet that the two “acquisitions address two critical technology priorities of all businesses today — building modern, enterprise-grade applications and protecting enterprise workloads and clients.” He also pointed out that multi-cloud, digital transformation, and the growing trend of moving “applications to the cloud and access it over distributed networks and from a diversity of endpoints” are significant reasons for placing high stakes on security. It is clear that by acquiring Carbon Black and Pivotal Software, the cloud computing and virtualization software company is seeking to expand its range of products and services, with a particular focus on security for Kubernetes.

A user on Hacker News comments, “I'm not surprised at the Pivotal acquisition. VMware is determined to succeed at Kubernetes. There is already a lot of integration with Pivotal's Kubernetes distribution both at a technical as well as a business level.” Developers around the world are also keen to see what the future holds for VMware, Carbon Black, and Pivotal Software.

https://twitter.com/rkagal1/status/1164852719594680321
https://twitter.com/CyberFavourite/status/1164656928913596417
https://twitter.com/arashg_/status/1164785525120618498
https://twitter.com/jambay/status/1164683358128857088
https://twitter.com/AnnoyedMerican/status/1164646153389875200

Per the press release, both transactions are expected to close in the second half of VMware’s fiscal year, which ends January 31, 2020. Interested users can read the VMware press releases on acquiring Carbon Black and Pivotal Software for more information.

Related reading:
- VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
- VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform
- VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service


‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl

Melisha Dsouza
08 Oct 2018
3 min read
On the 5th of October, the Amazon team announced the general availability of the AWS Service Operator, an open source project in an alpha state which allows users to manage their AWS resources directly from Kubernetes using the standard Kubernetes CLI, kubectl.

What is an Operator?
Kubernetes is built on top of a ‘controller pattern’, which allows applications and tools to listen to a central state store (etcd) and take action when something changes. The controller pattern lets users create decoupled experiences without having to worry about how other components are integrated. An operator is a purpose-built application that manages a specific type of component using this same pattern (a toy reconcile loop is sketched below). You can check an extensive list of operators at Awesome Operators.

All about the AWS Service Operator
Previously, users who needed to integrate Amazon DynamoDB with an application running in Kubernetes, or deploy an S3 bucket for their application to use, would reach for tools such as AWS CloudFormation or HashiCorp Terraform, and then build a way to deploy those resources. This requires the user to act as an operator themselves, managing and maintaining the entire service lifecycle. Users can now skip those steps and rely on Kubernetes’ built-in control loop, which stores the desired state in the API server for both the Kubernetes components and the AWS services needed.

The AWS Service Operator models AWS services as Custom Resource Definitions (CRDs) in Kubernetes and applies those definitions to a user’s cluster. A developer can model their entire application architecture, from the container to ingress to AWS services, from a single YAML manifest. This reduces the time it takes to create new applications and helps keep applications in their desired state. The AWS Service Operator exposes a way to manage DynamoDB tables, S3 buckets, Amazon Elastic Container Registry (Amazon ECR) repositories, SNS topics, SQS queues, and SNS subscriptions, with many more integrations coming soon.

Users on Hacker News reacted positively to the announcement.

You can learn more about this announcement on the AWS Service Operator project on GitHub. Head over to the official blog to explore how to use the AWS Service Operator to create a DynamoDB table and deploy an application that uses the table after it has been created.

Related reading:
- Limited Availability of DigitalOcean Kubernetes announced!
- Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
- Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS
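The controller pattern described above boils down to a reconcile loop: observe current state, compare it with desired state, and act on the difference. The sketch below is a deliberately simplified, dependency-free Python illustration of that loop; the DesiredTable/observe/create_or_update names are hypothetical and not the AWS Service Operator's actual API. The real operator does the same thing against the Kubernetes API server and AWS.

```python
from __future__ import annotations

import time
from dataclasses import dataclass

@dataclass
class DesiredTable:
    """Desired state, as it might be declared in a custom resource."""
    name: str
    read_capacity: int
    write_capacity: int

# Stand-in for the cloud provider: tables that actually exist right now.
observed_tables: dict[str, DesiredTable] = {}

def observe(name: str) -> DesiredTable | None:
    """Look up the current (observed) state of a table."""
    return observed_tables.get(name)

def create_or_update(desired: DesiredTable) -> None:
    """Act on the difference: make reality match the declaration."""
    observed_tables[desired.name] = desired
    print(f"reconciled table {desired.name!r} -> {desired}")

def reconcile_loop(desired_state: list[DesiredTable], interval: float = 5.0) -> None:
    """The core of the controller pattern: a level-triggered loop."""
    while True:
        for desired in desired_state:
            if observe(desired.name) != desired:
                create_or_update(desired)
        time.sleep(interval)

if __name__ == "__main__":
    reconcile_loop([DesiredTable("orders", read_capacity=5, write_capacity=5)])
```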


Stripe open sources ‘Skycfg’, a configuration builder for Kubernetes

Melisha Dsouza
05 Dec 2018
2 min read
On 3rd December, Stripe announced the open-sourcing of Skycfg, a configuration builder for Kubernetes. Skycfg was developed by Stripe as an extension library for the Starlark language and adds support for constructing Protocol Buffer messages. The team states that as the implementation of Skycfg stabilizes, the public API surface will be expanded so that Skycfg can be combined with other Starlark extensions.

Benefits of Skycfg
- Type safety: Skycfg uses Protobuf, which has a statically-typed data model, so the type of every field is known to Skycfg while it is building a configuration. Users are freed from the risk of accidentally assigning a string to a number, assigning a struct to a different struct, or forgetting to quote a YAML value.
- Less duplication: users can reduce duplicated typing and share logic by defining helper functions.
- Modules: Starlark supports importing modules from other files, which can be used to share common code between configurations. These modules can also shield service owners from complex Kubernetes logic.
- Limited dynamic behavior: Skycfg supports context variables, which let the Go caller pass arbitrary key:value pairs in the ctx parameter.

Skycfg simplifies the configuration of Kubernetes services, Envoy routes, Terraform resources, and other complex configuration data. Head over to GitHub for all the code and supporting files.

Related reading:
- Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
- Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
- ‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl
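To see why helper functions cut down on duplicated configuration, here is a small sketch. Starlark's syntax is a dialect of Python, so the snippet below is written (and runnable) as plain Python, but the shape is similar to a config helper you might define with Skycfg. The function name and the fields of the returned dict are illustrative assumptions, not Skycfg's actual protobuf constructors, which return typed messages rather than plain dicts.

```python
def deployment(name: str, image: str, replicas: int = 2, port: int = 8080) -> dict:
    """A helper that stamps out a Deployment-shaped config.

    In Skycfg this would build a typed Protocol Buffer message, so a typo
    like replicas="2" would be rejected while the config is being built;
    here we return a plain dict purely for illustration.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "template": {
                "spec": {
                    "containers": [{"name": name, "image": image,
                                    "ports": [{"containerPort": port}]}],
                },
            },
        },
    }

# Two services share the helper instead of copy-pasting the same block of YAML.
configs = [
    deployment("payments-api", "example.com/payments:1.4.2", replicas=4),
    deployment("audit-worker", "example.com/audit:0.9.0"),
]
```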


Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes

Savia Lobo
24 Apr 2018
3 min read
Jenkins is loved by many as the open source automation server that provides plugins to support building, deploying, and automating any project. However, Jenkins is not a cloud-native tool: it lacks the out-of-the-box (OOTB) capabilities to survive an outage and to scale seamlessly, among other limitations. To make Jenkins cloud native, the team has come up with a brand new platform, Jenkins X, an open source CI/CD solution for modern cloud applications designed to be deployed on Kubernetes.

Jenkins X is currently a sub-project within the Jenkins Foundation. It focuses entirely on Kubernetes, CI/CD, and cloud native use cases to provide great developer productivity. With the Kubernetes plugin, one does not have to worry about provisioning VMs or physical servers for build agents.

The target audience for Jenkins X includes both existing and new Jenkins users. It is specifically designed for those who:
- are already using Kubernetes and want to adopt CI/CD, or
- want to adopt CI/CD and increasingly move to the public cloud, even if they don’t know anything about Kubernetes.

Key features of Jenkins X
- An automated Continuous Integration (CI) and Continuous Delivery (CD) tool: Jenkins X does not require deep knowledge of the internals of a Jenkins pipeline. It provides sensible defaults and best-fit pipelines for one’s projects, implementing CI and CD fully.
- Automated management of environments: Jenkins X automates the management of environments and the promotion of new versions of applications between the environments each team gets, via GitOps.
- Automated preview environments: Jenkins X automatically provides preview environments for pull requests, so one can get faster feedback before changes are merged to master.
- Feedback on issues and pull requests: Jenkins X automatically comments on commits, issues, and pull requests with feedback when code is ready to be previewed, when code is promoted to environments, or when pull requests are generated automatically to upgrade versions.

Some other notable features of Jenkins X:
- Jenkins X uses a distribution of Jenkins as the core CI/CD engine. It also promotes a particular Git branching and repository model, and includes tools and services within the distribution to fit this model.
- The Jenkins X development model represents "best practice of developing Kubernetes applications", based in part on the experience of developing Fabric8, a project with a similar mission, and on the results of the State of DevOps report.
- If one follows these best practices, Jenkins X assembles all the pieces (Jenkins, Kubernetes, Git, CI/CD, and so on) by itself, so that developers can be instantly productive.
- Jenkins X ships with Kubernetes pipelines, agents, and integrations, which makes migrations to Kubernetes and microservices much simpler.

jx: the Jenkins X CLI tool
Jenkins X also defines a command line tool, jx, which encapsulates tasks as high-level operations. The CLI is used not only by developers from their computers, but also by the Jenkins pipeline itself. It is a central user interface which allows users to:
- easily install Jenkins X on any Kubernetes cluster
- create new Kubernetes clusters from scratch on the public cloud
- set up environments for each team
- import existing projects or create new Spring Boot applications, and later automatically set up the CI/CD pipeline and webhooks, create new releases and promote them through the environments on merge to master, and support preview environments on pull requests
A short sketch of these commands follows below.

Read more on Jenkins X on its official website.
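As an illustration of how jx wraps those operations, here is a minimal Python sketch that shells out to the CLI. The command names (`jx create cluster gke`, `jx import`, `jx promote`) reflect the jx CLI around the time of this announcement and may differ in current releases, so treat them as assumptions and check `jx --help` before running anything.

```python
import subprocess

def jx(*args: str) -> None:
    """Run a jx sub-command and fail loudly if it errors."""
    subprocess.run(["jx", *args], check=True)

if __name__ == "__main__":
    # Create a new Kubernetes cluster on GKE with Jenkins X installed
    # (interactive prompts are answered with defaults via --batch-mode).
    jx("create", "cluster", "gke", "--batch-mode")

    # Import an existing project: jx detects the build pack, adds a Jenkinsfile,
    # sets up webhooks, and wires the repo into the CI/CD pipeline.
    jx("import", "--batch-mode")

    # Promote a built release through the GitOps-managed environments.
    jx("promote", "--env", "production", "--version", "0.0.1")
```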

JFrog acquires DevOps startup ‘Shippable’ for an end-to-end DevOps solution

Melisha Dsouza
22 Feb 2019
2 min read
JFrog, a leading DevOps company, has acquired Shippable, a cloud-based startup that focuses on Kubernetes-ready continuous integration and delivery (CI/CD), helping developers ship code and deliver app and microservice updates. This strategic acquisition, JFrog’s fifth, aims at providing customers with a “complete, integrated DevOps pipeline solution”. The collaboration between JFrog and Shippable will allow users to automate their development processes from the moment code is committed all the way to production.

Shlomi Ben Haim, co-founder and CEO of JFrog, says in the official press release, “The modern DevOps landscape requires ever-faster delivery with more and more automation. Shippable’s outstanding hybrid and cloud native technologies will incorporate yet another best-of-breed solution into the JFrog platform. Coupled with our commitments to universality and freedom of choice, developers can expect a superior out-of-the-box DevOps platform with the greatest flexibility to meet their DevOps needs.”

According to an email sent to Packt Hub, JFrog will now allow developers to have a completely integrated DevOps pipeline with JFrog, while still retaining the full freedom to choose their own solutions in JFrog’s universal DevOps model. The plan is to release the first technology integrations with JFrog Enterprise+ this coming summer, and a full integration by Q3 of this year. According to JFrog, the acquisition will result in a more automated, complete, open, and secure DevOps solution on the market.

This is just another victory for JFrog, which has previously announced a $165 million Series D funding. Last year, the company also launched JFrog Xray, a binary analysis tool that performs recursive security scans and dependency analyses on all standard software package and container types.

Avi Cavale, founder and CEO of Shippable, says that Shippable users and customers will now “have access to leading security, binary management and other high-powered enterprise tools in the end-to-end JFrog Platform”, and that the combined forces of JFrog and Shippable can make full DevOps automation from code to production a reality.

Related reading:
- Spotify acquires Gimlet and Anchor to expand its podcast services
- Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’
- Adobe acquires Allegorithmic, a popular 3D editing and authoring company


Atlassian open sources Escalator, a Kubernetes autoscaler project

Savia Lobo
07 Jun 2018
2 min read
Atlassian recently announced the release of its open source Kubernetes autoscaler project, Escalator. The project aims to resolve autoscaling issues where clusters were not fast enough in scaling up or down.

The problem with scaling up, as Atlassian explains, was that when clusters hit capacity, users would have to wait a long time for additional Kubernetes workers to boot up and absorb the extra load; many builds cannot tolerate extended delays and would fail. The issue while scaling down was that once load had subsided, the autoscaler would not scale down fast enough. This is not much of an issue when the node count is low, but it becomes a real problem when that number reaches the hundreds.

Escalator, written in Go, is the solution
To address the scalability of its clusters, Atlassian created Escalator, a batch- and job-optimized autoscaler for Kubernetes. Escalator has two main goals (see the sketch below for the buffer idea):
- provide preemptive scale-up, with a buffer capacity feature to prevent users from hitting the ‘cluster full’ situation, and
- support aggressive scale-down of machines when they are no longer required.
Atlassian also wanted to expose Prometheus metrics for its Ops team, to gauge how well the clusters were working.

With Escalator, one need not wait for EC2 instances to boot and join the cluster. It also helps save money by letting one pay only for the number of machines actually needed; Atlassian reports savings of thousands of dollars per day on the workloads it runs. Escalator is now released as open source to the Kubernetes community, so other organizations running similar workloads can use it as well. The company plans to expand the tool to its external Bitbucket Pipelines users and to explore ways to manage more service-based workloads.

Read more about Escalator on the Atlassian blog. You can also check out its GitHub repo.

Related reading:
- The key differences between Kubernetes and Docker Swarm
- Microsoft’s Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
- Kubernetes Containerd 1.1 Integration is now generally available
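To make the "buffer capacity" idea concrete: rather than scaling only when pods are already unschedulable, preemptive scale-up keeps a configurable percentage of headroom free so new batch jobs can start without waiting for instances to boot. The snippet below is a conceptual Python sketch of that calculation under assumed names; it is not Escalator's actual code or configuration format.

```python
import math

def desired_nodes(requested_cpu: float, node_cpu: float, buffer_percent: float) -> int:
    """Return the node count needed to keep `buffer_percent` spare capacity.

    requested_cpu  -- total CPU currently requested by pods in the node group
    node_cpu       -- allocatable CPU per worker node
    buffer_percent -- headroom to keep free, e.g. 20.0 for 20%
    """
    # Target utilisation: with a 20% buffer we only want nodes ~80% full.
    target_utilisation = 1.0 - buffer_percent / 100.0
    usable_cpu_per_node = node_cpu * target_utilisation
    return max(1, math.ceil(requested_cpu / usable_cpu_per_node))

# Example: 92 cores requested on 16-core nodes with a 20% buffer gives 8 nodes
# instead of the 6 that raw demand would require, leaving room for a burst.
print(desired_nodes(requested_cpu=92.0, node_cpu=16.0, buffer_percent=20.0))
```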


Rackspace now supports Kubernetes-as-a-Service

Vijin Boricha
18 May 2018
2 min read
Rackspace recently announced the launch of its Kubernetes-as-a-Service offering, which is being rolled out to its private cloud customers worldwide this month. The company says the service will come to public cloud later this year. Rackspace, a managed-cloud computing company, revealed that it will fully operate and manage the Kubernetes deployment, including the infrastructure, and claims that users can save up to 50% compared to other open source deployments.

If you are looking to automate the deployment, scaling, and management of containerized applications, Kubernetes is the open-source option: it is the most efficient way of running online software across a vast range of machines. Kubernetes is becoming a leading player in cloud container orchestration, with bigger players like Microsoft Azure and Cisco adopting its services. However, not all businesses have the internal resources and expertise needed to effectively manage a Kubernetes environment on their own. By delivering a fully managed Kubernetes-as-a-Service, Rackspace allows organizations to focus more on building and running their applications.

With the new service, Rackspace delivers an enhanced level of ongoing operations management and support for the entire technology stack, ranging from the hardware to the Infrastructure as a Service (IaaS) layer to Kubernetes itself. Rackspace claims the key benefits of the offering include:
- support for operations such as updates, upgrades, patching, and security hardening,
- the ability to use a single platform to deploy Kubernetes clusters across private and public clouds, and
- access to an entire team of specialists 24x7x365.
Rackspace experts fully validate and inspect each component of the service, provide static container scanning, and enable customers to restrict user access to the environment.

This is just an overview of Rackspace’s extended support for Kubernetes-as-a-Service. You can learn more about the new offering on the Rackspace blog.

Related reading:
- What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
- How to secure a private cloud using IAM
- Google’s kaniko – An open-source build tool for Docker Images in Kubernetes, without a root access


Kubernetes 1.12 is releasing next week with updates to its storage, security and much more!

Melisha Dsouza
21 Sep 2018
4 min read
Kubernetes 1.12 will be released on Tuesday, the 25th of September 2018. This release comes with improvements to security and storage, cloud provider support, and other internal changes. Let’s take a look at the four areas that will be most affected by this update.

#1 Security
- Stability for Kubelet TLS bootstrap: the Kubelet TLS bootstrap now has a stable version, as also covered in the blog post Kubernetes Security: RBAC and TLS. The kubelet can generate a private key and a certificate signing request (CSR) to obtain the corresponding certificate.
- Kubelet server TLS certificate automatic rotation (beta): kubelets are able to rotate both client and server certificates. They can be rotated automatically through the respective RotateKubeletClientCertificate and RotateKubeletServerCertificate feature flags in the kubelet, which are now enabled by default.
- Egress and IPBlock support for NetworkPolicy: NetworkPolicy objects support an egress (or ‘to’) section to allow or deny traffic based on IP ranges or Kubernetes metadata, and CIDR IP blocks can be configured in rule definitions. Users can combine Kubernetes-specific selectors with IP-based ones for both ingress and egress policies (a sample manifest is sketched below).
- Encryption at rest: data encryption at rest can be achieved using Google Key Management Service as an encryption provider. Read more about this in the KMS providers for data encryption documentation.

#2 Storage
- Snapshot / restore volume support: VolumeSnapshotContent and VolumeSnapshot API resources can be used to create volume snapshots for users and administrators.
- Topology-aware dynamic provisioning and Kubernetes CSI topology support (beta): topology-aware dynamic provisioning allows a Pod to request one or more Persistent Volumes (PVs) whose topology is compatible with the Pod’s other scheduling constraints, such as resource requirements and affinity/anti-affinity policies. In multi-zone clusters, pods can be spread across zones in a specific region, and the volume binding mode controls when volume binding and dynamic provisioning happen.
- Automatic detection of node type: when the dynamic volume limits feature is enabled, Kubernetes automatically determines the node type and supports the appropriate number of attachable volumes for the node and vendor.

#3 Support for cloud providers
- Support for Azure Availability Zones: nodes within each availability zone will be added with the label failure-domain.beta.kubernetes.io/zone=<region>-<AZ>, and the Azure managed disks storage class will provision volumes taking this into account.
- Stable support for Azure Virtual Machine Scale Sets: this technology lets users create and manage a group of identical, load-balanced virtual machines.
- Azure support in the cluster autoscaler (stable): the cluster autoscaler allows clusters to grow as resource demands increase, scaling based on pending pods.

#4 Better support for Kubernetes internals
- Easier installation and upgrades through ComponentConfig: in earlier Kubernetes versions, modifying the base configuration of the core cluster components was not easily automatable. ComponentConfig is an ongoing effort to make component configuration more dynamic and directly reachable through the Kubernetes API.
- Improved multi-platform compatibility: Kubernetes aims to support multiple architectures, including arm, arm64, ppc64le, s390x, and Windows platforms. Automated CI e2e conformance tests have been deployed to ensure compatibility moving forward.
- Quota by priority: the scopeSelector field can be used to apply quota to Pods at a specific priority, letting users control a pod’s consumption of system resources based on its priority.

Apart from these four major areas, additional features to look out for include arbitrary/custom metrics in the Horizontal Pod Autoscaler, pod vertical scaling, mount namespace propagation, and much more. To learn about all the upgrades in Kubernetes 1.12, head over to Sysdig’s blog.

Related reading:
- Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
- Kubernetes 1.11 is here!
- VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
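To illustrate the egress and ipBlock support mentioned above, here is a minimal sketch: a NetworkPolicy that only allows pods labeled app=web to send traffic to 10.0.0.0/24 while excluding one subnet, applied with kubectl from Python. The label value and CIDRs are made-up example values; the manifest fields themselves (policyTypes, egress, to, ipBlock) are part of the networking.k8s.io/v1 API.

```python
import subprocess

# Egress policy for pods labeled app=web: traffic is only allowed to
# 10.0.0.0/24, except the 10.0.0.16/28 subnet (example values only).
NETWORK_POLICY = """
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-egress-allowlist
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
            except:
              - 10.0.0.16/28
"""

if __name__ == "__main__":
    # Pipe the manifest into kubectl; requires a kubeconfig pointing at a
    # cluster whose network plugin enforces NetworkPolicy.
    subprocess.run(
        ["kubectl", "apply", "-f", "-"],
        input=NETWORK_POLICY,
        text=True,
        check=True,
    )
```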

Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!

Savia Lobo
30 May 2018
2 min read
Kublr, a comprehensive Kubernetes platform for the enterprise, announced the release of Kublr 1.9.2 at DevOpsCon Berlin. Kublr provides a Kubernetes platform that makes it easy for operations teams to deploy, run, and manage containerized applications, while allowing developers to use the development tools and environment of their choice.

Kublr 1.9.2 allows developers to deploy the complete Kublr platform and Kubernetes clusters in isolated environments without requiring access to the Internet. This is an advantage for organizations that have sensitive data which must remain secure. While secured and isolated, this data still benefits from features such as auto-scaling, backup and disaster recovery, and centralized monitoring and log collection.

Slava Koltovich, CEO of Kublr, stated, “We’ve learned from several financial institutions that there is a vital need for cloud-like capabilities in completely isolated environments. It became increasingly clear that, to be truly enterprise grade, Kublr needed to work in even the most secure environments. We are proud to now offer that capability out-of-the-box.”

Key updates in the Kublr 1.9.2 changelog:
- ability to deploy Kublr without access to the Internet
- support for Docker EE on RHEL
- support for CentOS 7.4
- deletion of on-prem clusters
- additional kubelet monitoring
The changelog also includes fixes for some known issues. Kublr further announced that it is now Certified Kubernetes for Kubernetes v1.10. To learn more about Kublr 1.9.2 in detail, check the release notes.

Related reading:
- Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
- Kubernetes Containerd 1.1 Integration is now generally available
- Introducing OpenStack Foundation’s Kata Containers 1.0


Puppet launches Puppet Remediate, a vulnerability remediation solution for IT Ops

Vincy Davis
22 Aug 2019
3 min read
Yesterday, Puppet announced a vulnerability remediation solution called Puppet Remediate, which aims to reduce the time IT teams take to identify, prioritize, and rectify mission-critical vulnerabilities.

Matt Waxman, head of product at Puppet, said, “There is a major gap between sophisticated scanning tools that identify vulnerabilities and the fragmented and manual, error-prone approach of fixing these vulnerabilities.” He adds, “Puppet Remediate closes this gap giving IT the insight they need to end the current soul-crushing work associated with vulnerability remediation to ensure they are keeping their organization safe.”

Puppet Remediate speeds up remediation by drawing on security partners who have access to potentially sensitive vulnerability data. It discovers vulnerabilities according to the type of infrastructure resources affected by them, and can then take instant action “to remediate vulnerable packages without requiring any agent technology on the vulnerable systems on both Linux and Windows through SSH and WinRM”, says Puppet.

Key features in Puppet Remediate
- Shared vulnerability data between security and IT Ops: Puppet Remediate unifies infrastructure data and vulnerability data, giving IT Ops access to vulnerability data in real time, reducing delays and eliminating the risks associated with manual handover of data.
- Risk-based prioritization: it assists IT teams in prioritizing critical systems and identifying vulnerabilities within the organization's systems based on infrastructure context, giving IT teams more clarity on what to fix first.
- Agentless remediation: IT teams can take immediate action to rectify a vulnerability without leaving the application and without requiring any agent technology on the vulnerable systems.

Channel partners provide Puppet with established infrastructure and InfoSec practices
Puppet has selected its initial channel partners based on their established infrastructure and InfoSec practices. The channel partners will help Puppet Remediate bridge the gap between security and IT practices in enterprises. Fishtech, a cybersecurity solutions provider, and Bitbone, a Germany-based computer software company, are the initial channel partners for Puppet Remediate.

Sebastian Scheuring, CEO of Bitbone AG, says, “Puppet Remediate offers real added value with its new functions to our customers. It drastically automates the workflow of vulnerability remediation through taking out the manual, mundane and error-prone steps that are required to remediate vulnerabilities. Continuous scans, remediation tasks and short cycles of update processes significantly increase the security level of IT environments.”

Check out the website to know more about Puppet Remediate.

Related reading:
- Listen: Puppet’s VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]
- Puppet announces updates in a bid to help organizations manage their “automation footprint”
- “This is John. He literally wrote the book on Puppet” – An Interview with John Arundel


The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes

Vincy Davis
11 Sep 2019
4 min read
Today, Sumo Logic revealed the fourth edition of its “Continuous Intelligence Report: The State of Modern Applications and DevSecOps in the Cloud.” The primary goal of the report is to present data-driven insights, best practices, and the latest trends by analyzing technology adoption among Sumo Logic customers. The data in the report is derived from 2,000+ Sumo Logic customers running applications on cloud platforms like AWS, Azure, and Google Cloud Platform, as well as on-premise environments.

This year, the Continuous Intelligence report finds that, with a 50% increase in enterprise adoption and deployments, multi-cloud is growing faster than any other modern infrastructure category.

In a statement, Kalyan Ramanathan, vice president of product marketing for Sumo Logic, says, “the increased adoption of services to enable and secure a multi-cloud strategy are adding more complexity and noise, which current legacy analytics solutions can’t handle. To address this complexity, companies will need a continuous intelligence strategy that consolidates all of their data into a single pane of glass to close the intelligence gap. Sumo Logic provides this strategy as a cloud-native, continuous intelligence platform, delivered as a service.”

Key findings of the Modern App Report 2019

Kubernetes is highly prevalent in multi-cloud environments
Kubernetes offers broad multi-cloud support and can be used by many organizations to run applications across cloud environments. The 2019 Modern App survey reveals that 1 in 5 AWS customers use Kubernetes. The report states, “Enterprises are betting on Kubernetes to drive their multi-cloud strategies. It is imperative that enterprises deploy apps on Kubernetes to easily orchestrate/manage/scale apps and also retain the flexibility to port apps across different clouds.”

Open source has disrupted the modern application stack
Open source solutions for containers, orchestration, infrastructure, and application services now lead in the majority of categories: 4 out of 6 application infrastructure tiers are dominated by open source. Orchestration technologies, one such category of open source solutions, are used not only to automate the deployment and scaling of containers, but also to ensure the reliability of the applications and workloads running on them.

Adoption of individual IaaS services suggests enterprises are trying to avoid vendor lock-in
The Modern App 2019 survey finds that typical enterprises use only 15 of the 150+ discrete services marketed and available for consumption in AWS. The adoption pattern shows that basic compute, storage, database, network, and identity services make up the top 10 adopted services in AWS, while services like management, tooling, and advanced security are adopted at a lower rate than the core infrastructure services (50% or less).

Serverless technology, mainly AWS Lambda, continues to rise
Serverless technologies like AWS Lambda continue to grow steeply as a cost-effective option to speed cloud and DevOps deployment automation. The Modern App Report 2019 reveals that AWS Lambda adoption grew to 36% in 2019, up 24% from 2017, and that it is also being used in several non-production use cases. AWS Lambda continues to feature in cloud migration and digital transformation efforts, making it one of the top 10 AWS services by adoption. “Lambda usage for application or deployment automation technology should be considered for every production application,” reads the report.

(The report includes supporting charts for each of these findings.)

The 2019 Continuous Intelligence Report is the first industry report to quantitatively define the state of the modern application stack and its implications for the growing technology landscape. Professionals such as cloud architects, Site Reliability Engineers (SREs), data engineers, operations teams, DevOps practitioners, and Chief Information Security Officers (CISOs) can use the report to learn how to build, run, and secure modern applications and cloud infrastructures. If you are interested to know more, you can check out the full report on the Sumo Logic blog.

Other news in Cloud and Networking
- Containous introduces Maesh, a lightweight and simple Service Mesh to ease microservices adoption
- Amazon announces improved VPC networking for AWS Lambda functions
- Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more

MacStadium announces ‘Orka’ (Orchestration with Kubernetes on Apple)

Savia Lobo
13 Aug 2019
2 min read
Today, MacStadium, an enterprise-class cloud solution for Apple Mac infrastructure, announced ‘Orka’ (Orchestration with Kubernetes on Apple). Orka is a new virtualization layer for Mac build infrastructure based on Docker and Kubernetes technology. It offers a solution for orchestrating macOS in a cloud environment using Kubernetes on genuine Apple Mac hardware. With Orka, users can apply native Kubernetes commands to macOS virtual machines (VMs) running on genuine Apple hardware.

“While Kubernetes and Docker are not new to full-stack developers, a solution like this has not existed in the Apple ecosystem before,” MacStadium wrote in an email statement to us.

“The reality is that most enterprises need to develop applications for Apple platforms, but these enterprises prefer to use nimble, software-defined build environments,” said Greg McGraw, Chief Executive Officer, MacStadium. “With Orka, MacStadium’s flagship orchestration platform, developers and DevOps teams now have access to a software-defined Mac cloud experience that treats infrastructure-as-code, similar to what they are accustomed to using everywhere else.”

Developers creating apps for Mac or iOS must build on genuine Apple hardware. Until now, however, popular orchestration and container technologies like Kubernetes and Docker have been unable to leverage Mac operating systems. With Orka, Apple OS development teams can use container technology features in a Mac cloud the same way they build on other cloud platforms like AWS, Azure, or GCP.

As part of its initial release, Orka will ship with a plugin for Jenkins, an open-source automation tool that enables developers to build, test, and deploy their software using continuous integration techniques. MacStadium will also present a session at DevOps World | Jenkins World in San Francisco (August 12-15) demonstrating how Orka integrates with Jenkins build pipelines and how it leverages the capability and power of Docker/Kubernetes in a Mac development environment.

To know more about Orka in detail, visit MacStadium’s official website.

Related reading:
- CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
- Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
- Implementing Horizontal Pod Autoscaling in Kubernetes [Tutorial]


Codefresh’s Fixvember, a Devops hackathon to encourage developers to contribute to open source

Sugandha Lahoti
30 Oct 2018
2 min read
Open source is getting a lot of attention these days, and to incentivize people to contribute to it, Codefresh has launched “Fixvember”, a do-it-from-home DevOps hackathon. Codefresh is a Kubernetes-native CI/CD platform that allows for creating powerful pipelines based on DinD (Docker-in-Docker) as a service and provides self-service test environments, release management, and a Docker and Helm registry.

Codefresh’s Fixvember is a DevOps hackathon in which Codefresh will reward DevOps professionals who contribute to open source with a limited-edition t-shirt. The event encourages developers (and not just Codefresh users) to make at least three contributions to open source projects, including building automation, adding better testing, and fixing bugs. The focus is on making engineers more successful by following DevOps best practices. Adding a Codefresh YAML to an open-source repo may also earn developers additional prizes or recognition (a sketch of such a pipeline file follows below).

Codefresh debuts Fixvember in sync with the launch of public-facing builds in the Codefresh platform. To increase the adoption of CI/CD processes, Codefresh is offering 120 builds/month, a private Docker registry, a Helm repository, and Kubernetes/Helm release management for free, a substantial free tier with everything needed to help teams.

Developers can participate by following these steps:
1. Sign up at codefresh.io/fixvember
2. Make 3 open source contributions that improve DevOps. This could be adding or updating a Codefresh pipeline in a repo, adding tests or validation to a repo, or just fixing bugs.
3. Submit your results using your special email link

“I can’t promise the limited-edition t-shirt will increase in value, but if it does, I bet it will be worth $1,000 by next year. The FDA prevents me from promising any health benefits, but it’s possible this t-shirt will actually make you smarter,” joked Dan Garfield, Chief Technology Evangelist for Codefresh. “Software engineers sometimes have a hero complex that adding cool new features is the most valuable thing. But, being ‘Super Fresh’ means you do the dirty work that makes new features deploy successfully. Adding automated pipelines, writing tests, or even fixing bugs are the lifeblood of these projects.”

Read more about Fixvember on the Codefresh blog.

Related reading:
- Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
- JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding
- Is your Enterprise Measuring the Right DevOps Metrics?
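For contributors who want to add a pipeline as one of their three contributions, here is a rough sketch of what a minimal codefresh.yml might look like, generated from Python so the file can be dropped into a repo. The step structure (a build step plus a test step) follows the general shape of Codefresh pipelines, but the exact field names and the repo/image names here are assumptions; check the Codefresh pipeline documentation before relying on them.

```python
from pathlib import Path

# A rough sketch of a minimal Codefresh pipeline: build the repo's Docker
# image, then run its test suite inside that image. Field names follow the
# general shape of codefresh.yml but should be verified against the docs.
CODEFRESH_YML = """\
version: "1.0"
steps:
  build_image:
    title: Build the project image
    type: build
    image_name: my-org/my-open-source-project   # assumption: replace with your repo
    dockerfile: Dockerfile
  run_tests:
    title: Run the test suite
    image: ${{build_image}}
    commands:
      - make test
"""

if __name__ == "__main__":
    Path("codefresh.yml").write_text(CODEFRESH_YML)
    print("wrote codefresh.yml")
```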