
Tech News - DevOps

82 Articles

The future of Jenkins is cloud native and a faster development pace with increased stability

Prasad Ramesh
04 Sep 2018
4 min read
Jenkins has been a success for more than a decade, mainly due to its extensibility, its community, and its general-purpose design. But some long-standing challenges and problems have become more pronounced. Kohsuke Kawaguchi, the creator of Jenkins, is now planning steps to solve these problems and make the platform better.

Challenges in Jenkins

With growing competition in continuous integration (CI), the following limitations in Jenkins get in teams' way. Some of them discourage admins from installing and using plugins.

Service instability: CI is a critical service nowadays. People are running bigger workloads, need more plugins, and expect high availability, much as they do from services like instant messaging platforms, which are online all the time. Jenkins is unable to keep up with this expectation, and a large instance requires a lot of overhead to keep it running. It is common for someone to restart Jenkins every day, which delays processes. Errors need to be contained to a specific area without impacting the whole service.

Brittle configuration: Installing or upgrading plugins and tweaking job settings have caused side effects, which makes admins lose confidence that they can make these changes safely. There is a fear that the next upgrade might break something, cause problems for other teams, and affect delivery.

Assembly required: Jenkins requires an assembly of service blocks to work as a whole. As CI has become mainstream, users want something that can be deployed in a few clicks. Having too many choices is confusing and leads to uncertainty when assembling, and this is not something that can be solved by creating more plugins.

Reduced development velocity: It is difficult for a contributor to make a change that spans multiple plugins. The tests do not give enough confidence to ship code; many of them do not run automatically and the coverage is not deep.
Changes and steps to make Jenkins better

There are two key efforts here: Cloud Native Jenkins and Jolt. Cloud Native Jenkins is a CI engine that runs on Kubernetes and has a different architecture; Jolt will continue Jenkins 2 at a faster development pace with increased stability.

Cloud Native Jenkins

It is a sub-project in the context of the Cloud Native SIG. It will use Kubernetes as its runtime and will have a new extensibility mechanism to retain what works and to continue the development of the automation platform's ecosystem. Data will live on cloud-managed data services to achieve high availability and horizontal scalability, relieving admins of additional responsibilities. Configuration as Code and Jenkins Evergreen help with the brittleness. There are also plans to make Jenkins secure by default design and to continue with Jenkins X, which has been received very well. The aim is to get things going in five clicks through easy integration with key services.

Jolt in Jenkins

Cloud Native Jenkins is not usable for everyone and targets only a particular set of functionalities. It also requires a platform with limited adoption today, so Jenkins 2 will be continued at a faster pace; for this, Jolt in Jenkins is introduced. It is inspired by what happened to the development of Java SE: a change in the release model, shedding off parts to move faster. There will be a major version number change every couple of months. The platform needs to remain largely compatible, and the pace needs to justify any inconvenience put on users.

For more, visit the official Jenkins Blog.

How to build and enable the Jenkins Mesos plugin
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes
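The Configuration as Code effort mentioned above lets admins describe a Jenkins instance declaratively instead of clicking through the UI. A minimal sketch of what a JCasC file can look like; the exact keys depend on the plugins installed, so treat every key and value here as illustrative rather than a guaranteed schema:

```yaml
# Illustrative Jenkins Configuration as Code (JCasC) sketch.
# Available keys vary by installed plugins; names below are examples only.
jenkins:
  systemMessage: "Configured by JCasC - no manual UI changes"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: admin
          password: ${ADMIN_PASSWORD}   # injected from the environment, not hard-coded
```

Keeping this file in version control is what addresses the "brittle configuration" problem: a bad change can be reviewed, diffed, and rolled back like any other commit.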

JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding

Sugandha Lahoti
05 Oct 2018
2 min read
JFrog, the DevOps-based artifact management platform, announced a $165 million Series D funding round yesterday, led by Insight Venture Partners. The secured funding is expected to drive JFrog product innovation, support rapid expansion into new markets, and accelerate both organic and inorganic growth. Other new investors included Spark Capital and Geodesic Capital, as well as existing investors Battery Ventures, Sapphire Ventures, Scale Venture Partners, Dell Technologies Capital, and Vintage Investment Partners. Additional JFrog investors include Gemini VC Israel, Qumra Capital, and VMware.

JFrog transforms the way software is updated by offering an end-to-end, universal, highly available software release platform. This platform is used for storing, securing, monitoring, and distributing binaries for all technologies, including Docker, Go, Helm, Maven, npm, NuGet, PyPI, and more. According to the company, more than 5 million developers use JFrog Artifactory as their system of record when they build and release software. It also supports multiple deployment options, with its products available in a hybrid model, on-premises, and across the major cloud platforms: Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

The announcement comes on the heels of Microsoft's $7.5 billion purchase of the coding-collaboration site GitHub earlier this year. Since its Series C funding round in 2016, the company has seen more than 500% sales growth and expanded its reach to over 4,500 customers, including more than 70% of the Fortune 100. It continues to add 100 new commercial logos per month and supports the world's open source communities with its Bintray binary hub. Bintray powers 700K community projects distributing over 5.5M unique software releases that generate over 3 billion downloads a month.

Read more about the announcement in JFrog's official press release.

OmniSci, formerly MapD, gets $55 million in series C funding
Microsoft's GitHub acquisition is good for the open source community
Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

‘Istio’ available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management

Melisha Dsouza
12 Dec 2018
3 min read
KubeCon+CloudNativeCon, happening in Seattle this week, has excited developers with a plethora of new announcements and releases. The conference, dedicated to Kubernetes and other cloud native technologies, brings together adopters and technologists from leading open source and cloud native communities to discuss new advancements on the cloud front. At this year's conference, Google Cloud announced the beta availability of Istio for its Google Kubernetes Engine.

Istio was launched in mid-2017 as a collaboration between Google, IBM, and Lyft. According to Google, this open-source "service mesh", used to connect, manage, and secure microservices on a variety of platforms like Kubernetes, will play a vital role in helping developers make the most of their microservices. Yesterday, Google Cloud Director of Engineering Chen Goldberg and Director of Product Management Jennifer Lin said in a blog post that the availability of Istio on Google Kubernetes Engine will provide "more granular visibility, security and resilience for Kubernetes-based apps". The service will be made available through Google's Cloud Services Platform, which bundles together the tools and services developers need to get their container apps up and running on the company's cloud or in on-premises data centres.

In an interview with SiliconANGLE, Holger Mueller, principal analyst and vice president of Constellation Research Inc., compared software containers to "cars". He says that "Kubernetes has built the road but the cars have no idea where they are, how fast they are driving, how they interact with each other or what their final destination is. Enter Istio and enterprises get all of the above. Istio is a logical step for Google and a sign that the next level of deployments is about manageability, visibility and awareness of what enterprises are running."

Additional features of Istio

Istio allows developers and operators to manage applications as services, rather than as lots of different infrastructure components.
Istio allows users to encrypt all their network traffic, layering transparently onto any existing distributed application. Users need not embed any client libraries in their code to use this functionality.
Istio on GKE comes with an integration into Stackdriver, Google Cloud's monitoring and logging service.
Istio securely authenticates and connects a developer's services to one another. It transparently adds mTLS to service communication, encrypting all information in transit, and provides a service identity for each service, allowing developers to create service-level policies enforced for each individual application transaction while providing non-replayable identity protection.

Istio is yet another step for GKE that will make it easier to secure and efficiently manage containerized applications. Head over to TechCrunch for more insights on this news.

Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
What's new in Google Cloud Functions serverless platform
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
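To give a feel for the service-level policies described above: in Istio releases of that era (circa 1.0), namespace-wide mTLS could be requested with an authentication Policy resource. A hedged sketch; the namespace is a placeholder, and this v1alpha1 API was later replaced by PeerAuthentication in newer Istio versions:

```yaml
# Require mTLS for all workloads in one namespace (Istio ~1.0-era API sketch).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: my-namespace   # placeholder namespace
spec:
  peers:
    - mtls: {}              # peers must present a mutual-TLS identity
```

The point of the example is the shape of the feature: encryption and identity are declared as cluster resources, not embedded in application code.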

Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS

Melisha Dsouza
28 Sep 2018
3 min read
As promised by the Kubernetes team earlier this month, Kubernetes 1.12 now stands released! With a focus on internal improvements, the release includes two highly anticipated features: general availability of Kubelet TLS Bootstrap and support for Azure Virtual Machine Scale Sets (VMSS). This promises better security, availability, resiliency, and ease of use for faster delivery of production applications. Let's dive into the features of Kubernetes 1.12.

#1 General availability of Kubelet TLS Bootstrap

The team has made Kubelet TLS Bootstrap generally available. This feature significantly streamlines Kubernetes' ability to add and remove nodes from a cluster. Cluster operators are responsible for ensuring that the TLS assets they manage remain up to date and can be rotated in the face of security events. Kubelet server certificate bootstrap and rotation (beta) introduces a process for generating a key locally and then issuing a Certificate Signing Request to the cluster API server to get an associated certificate signed by the cluster's root certificate authority. As certificates approach expiration, the same mechanism is used to request an updated certificate.

#2 Stable support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler

Azure Virtual Machine Scale Sets (VMSS) allow users to create and manage a homogeneous VM pool that can automatically grow or shrink based on demand or a set schedule. Users can easily manage, scale, and load balance multiple VMs to provide high availability and application resiliency, which is ideal for large-scale applications that run as Kubernetes workloads. The stable support allows Kubernetes to manage the scaling of containerized applications with Azure VMSS, and users can integrate their applications with cluster-autoscaler to automatically adjust the size of their Kubernetes clusters.

#3 Other feature updates

Encryption at rest via KMS is now in beta. It adds multiple encryption providers, including Google Cloud KMS, Azure Key Vault, AWS KMS, and HashiCorp Vault, which encrypt data as it is stored to etcd.
RuntimeClass is a new cluster-scoped resource that surfaces container runtime properties to the control plane.
Topology-aware dynamic provisioning is now in beta: storage resources can now understand where they live.
Configurable pod process namespace sharing enables users to configure containers within a pod to share a common PID namespace by setting an option in the PodSpec.
Vertical scaling of pods will help vary the resource limits on a pod over its lifetime.
Snapshot/restore functionality for Kubernetes and CSI provides a standardized API design and adds PV snapshot/restore support for CSI volume drivers.

To explore these features in depth, the team will be hosting a 5 Days of Kubernetes series next week, walking users through the following features:
Day 1 - Kubelet TLS Bootstrap
Day 2 - Support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler
Day 3 - Snapshots Functionality
Day 4 - RuntimeClass
Day 5 - Topology Resources

Additionally, users can join the members of the release team on November 6th at 10 am PDT in a webinar covering the major features of this release. You can check out the release on GitHub, and if you would like to know more about it, head over to the official Kubernetes blog.

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
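The KMS encryption-at-rest feature above is wired up through the API server's encryption configuration file. A hedged sketch of what that file can look like; the plugin name and socket path are placeholders, and the exact kind/apiVersion has been renamed across Kubernetes releases, so check the docs for your version:

```yaml
# API server encryption configuration sketch (1.12-era naming; later releases
# renamed this to EncryptionConfiguration under apiserver.config.k8s.io).
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets                          # encrypt Secret objects in etcd
    providers:
      - kms:
          name: my-kms-plugin            # placeholder: name of the KMS plugin
          endpoint: unix:///var/run/kms.sock   # placeholder socket to the plugin
          cachesize: 100
      - identity: {}                     # fallback provider for reading old, unencrypted data
```

The file is passed to the kube-apiserver via its encryption-provider-config flag; from then on, Secrets are encrypted by the external KMS before they ever touch etcd.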

Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12

Melisha Dsouza
10 Oct 2018
3 min read
Kubernetes v1.12 now offers alpha support for volume snapshotting. This allows users to create or delete volume snapshots, and natively create new volumes from a snapshot, using the Kubernetes API. A snapshot represents a copy of a volume at a particular instant in time. It can be used to provision a new volume pre-populated with the snapshot data, or to restore an existing volume to a previous state.

Importance of adding snapshots to Kubernetes

The main goal of the Kubernetes team is to create an abstraction layer between distributed-systems applications and underlying clusters, so that application deployment requires no "cluster specific" knowledge. Snapshot operations are critical functionality for many stateful workloads; for instance, a database administrator may want to snapshot a database volume before starting a database operation. By providing a standard way to trigger snapshot operations in the Kubernetes API, users don't have to manually execute storage-system-specific operations around the Kubernetes API. They can instead incorporate snapshot operations into their tooling and policy in a cluster-agnostic way, assured that it will work against arbitrary Kubernetes clusters regardless of the underlying storage. These snapshot primitives help build advanced, enterprise-grade storage administration features for Kubernetes, including data protection, data replication, and data migration.

3 new API objects introduced by Kubernetes volume snapshots

#1 VolumeSnapshot: The creation and deletion of this object indicates whether a user wants to create or delete a cluster resource (a snapshot). It is used to request the creation of a snapshot for a specified volume, and gives the user information about snapshot operations, such as the timestamp at which the snapshot was taken and whether the snapshot is ready to use.

#2 VolumeSnapshotContent: This object is created by the CSI volume driver once a snapshot has been successfully created. It contains information about the snapshot, including its ID, and represents a provisioned resource on the cluster (a snapshot). Once a snapshot is created, the VolumeSnapshotContent object binds to the VolumeSnapshot for which it was created, with a one-to-one mapping.

#3 VolumeSnapshotClass: This object, created by cluster administrators, describes how snapshots should be created, including the driver information, how to access the snapshot, and so on.

These snapshot objects are defined as CustomResourceDefinitions (CRDs). End users need to verify that a CSI driver that supports snapshots is deployed on their Kubernetes cluster; CSI drivers that support snapshots will automatically install the required CRDs.

Limitations of the alpha implementation of snapshots

The alpha implementation does not support reverting an existing volume to the earlier state represented by a snapshot.
It does not support "in-place restore" of an existing PersistentVolumeClaim from a snapshot: users can provision a new volume from a snapshot, but updating an existing PVC to a new volume and reverting it back to an earlier state is not allowed.
No snapshot consistency guarantees are given beyond those provided by the storage system.

An example of creating new snapshots and importing existing snapshots is explained well on the Kubernetes Blog. Head over to the team's Concepts page or GitHub for the official documentation of the snapshot feature.

'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
Limited Availability of DigitalOcean Kubernetes announced!
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
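Putting the three objects together: a user-facing snapshot request in the v1alpha1 API looked roughly like the sketch below. The names are placeholders, and the schema changed on the way to beta and GA, so treat this as an illustration of the alpha shape rather than a current reference:

```yaml
# Request a snapshot of an existing PVC (v1alpha1-era sketch).
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: db-snapshot                # placeholder snapshot name
  namespace: default
spec:
  snapshotClassName: csi-hostpath-snapclass   # a VolumeSnapshotClass set up by an admin
  source:
    kind: PersistentVolumeClaim
    name: db-data                  # the PVC to snapshot
```

Once the CSI driver completes the snapshot, a VolumeSnapshotContent object is created and bound to this VolumeSnapshot, and a new PVC can reference the snapshot as its data source.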

OpenShift 3.9 released ahead of planned schedule

Gebin George
09 Apr 2018
2 min read
In an effort to sync their releases with Kubernetes, Red Hat skipped the 3.8 release and came up with version 3.9 of their container application platform, OpenShift. Red Hat seems to be moving quickly with the OpenShift roadmap, with the 3.10 release lined up for Q2 2018 (June). The primary takeaway from the accelerated release cycle of OpenShift is the importance of the tool in Red Hat's DevOps expansion. With dedicated support for cutting-edge tools like Docker and Kubernetes, OpenShift looks like a strong DevOps tool that is here to stay.

The OpenShift 3.9 release has quite a few exciting middleware updates, bug fixes, and service extensions. Let's look at some of the enhancements in key areas.

Container orchestration

OpenShift has added soft image pruning, wherein you don't have to remove the actual image but just update the etcd storage file instead.
Added support to deploy Red Hat CloudForms on the OpenShift container engine, with the following added features: OpenShift Container Platform template provisioning, offline OpenSCAP scans, alert management (you can choose Prometheus, currently in Technology Preview, and use it in CloudForms), reporting enhancements, provider updates, chargeback enhancements, and UX enhancements.
The inclusion of CRI-O v1.9, a lightweight native Kubernetes runtime interface. The addition of CRI-O brings a minimal and secure architecture, excellent scale and performance, the ability to run any Open Container Initiative (OCI) or Docker image, and familiar operational tooling and commands.

Storage

Persistent volume claims can be expanded online for CNS GlusterFS, Cinder, and GCE PD.
CNS deployments are automated, and a CNS uninstall playbook is added with the release of OpenShift 3.9.

Developer experience

Improvements in Jenkins support, which intelligently predicts pod memory before processing.
Updated CLI plugins (binary extensions), which extend the default set of oc commands, allowing you to perform new tasks.
The BuildConfig defaulter now allows a toleration value in the specification, which is applied upon creation.

For minor bug fixes and the complete release data, refer to the OpenShift Release Notes.
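Soft image pruning as described above corresponds to running the image pruner while skipping the registry, so only etcd references are cleaned up. A hedged sketch of the invocation; flag names follow the 3.x-era oc client and retention values are examples, so confirm against `oc adm prune images --help` for your version:

```shell
# Prune stale image references from etcd only, leaving registry data intact
# ("soft" pruning). Without --confirm this is a dry run.
oc adm prune images \
  --keep-tag-revisions=3 \
  --keep-younger-than=60m \
  --prune-registry=false \
  --confirm
```

Running with --prune-registry=false first is a low-risk way to reclaim etcd bookkeeping; a later full prune can remove the underlying registry blobs.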

GitLab 11.0 released!

Savia Lobo
25 Jun 2018
2 min read
GitLab recently announced the release of GitLab 11.0, which includes major features such as Auto DevOps and License Management, among others.

The Auto DevOps feature is generally available in GitLab 11.0. It is a pre-built, fully featured CI/CD pipeline that automates the entire delivery process. With this feature, one simply commits code and Auto DevOps does the rest, including building and testing the app and performing code quality, security, and license scans. One can also package, deploy, and monitor applications using Auto DevOps. Chris Hill, head of systems engineering for infotainment at Jaguar Land Rover, said, "We're excited about Auto DevOps, because it will allow us to focus on writing code and business value. GitLab can then handle the rest; automatically building, testing, deploying, and even monitoring our application."

Other highlights of the release:

License Management automatically detects the licenses of a project's dependencies.
Enhanced security testing of code, containers, and dependencies: GitLab 11.0 extends the coverage of Static Application Security Testing (SAST) to include Scala and .NET.
Kubernetes integration features: if one needs to debug or check on a pod, they can do so by reviewing the Kubernetes pod logs directly from GitLab's deployment board.
Improved Web IDE: one can view CI/CD pipelines from the IDE and get immediate feedback if a pipeline fails. Switching tasks can be disruptive, so the updated Web IDE makes it easy to quickly switch to the next merge request, to create, improve, or review without leaving the Web IDE.
Enhanced Epic and Roadmap views: GitLab 11.0 has an updated Epic/Roadmap navigation interface to make it easier to see the big picture and make planning easier.

Read more about GitLab 11.0 on GitLab's official website.

GitLab's new DevOps solution
GitLab open sources its Web IDE in GitLab 10.7
The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab
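Auto DevOps is typically switched on from a project's settings, but projects that want to customize the pipeline can pull GitLab's template into their own .gitlab-ci.yml. A hedged sketch; the include:template syntax depends on the GitLab version, and the variable override is just an example:

```yaml
# .gitlab-ci.yml - opt in to the Auto DevOps pipeline and tweak one setting.
# Assumes a GitLab version that supports include:template.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  POSTGRES_ENABLED: "false"   # example override: skip the bundled database service
```

This keeps the commit-and-forget workflow described above while leaving room for per-project adjustments.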

Puppet announces updates in a bid to help organizations manage their "automation footprint"

Richard Gall
03 May 2019
3 min read
There are murmurs on the internet that tools like Puppet are being killed off by Kubernetes. The reality is a little more complex. True, Kubernetes poses some challenges to various players in the infrastructure automation market, but these tools nevertheless remain important for engineers charged with managing infrastructure. Kubernetes is forcing this market to adapt, and with Puppet announcing new tools and features for its portfolio in Puppet Enterprise 2019.1 yesterday, it's clear the team is making the strides needed to remain a key part of the infrastructure automation landscape.

Update: This article was amended to highlight that Puppet Enterprise is a distinct product separate from Continuous Delivery for Puppet Enterprise.

What's new for Puppet Enterprise 2019.1?

There are two key elements to the Puppet announcement: enhanced integration with Puppet Bolt, an open source, agentless task runner, and improved capabilities in Continuous Delivery for Puppet Enterprise.

Puppet Bolt

Puppet Bolt, the Puppet team argue, offers a really simple way to get started with infrastructure automation "without requiring an agent installed on a remote target." The Puppet team explain that Puppet Bolt essentially allows users to expand the scope of what they can automate without losing the consistency and control you'd expect from a tool like Puppet. This has significant benefits in the context of Kubernetes. Bryan Belanger, Principal Consultant at Autostructure, said, "We love using Puppet Bolt because it leverages our existing Puppet roles and classifications, allowing us to easily make changes to large groups of servers and upgrade Kubernetes clusters quicker, which is often a pain if done manually." Belanger continues: "With the help of Puppet Bolt, we were also able to fix more than 1,000 servers within five minutes and upgrade our Kubernetes clusters within four hours, which included coding and tasks."

Continuous Delivery for Puppet Enterprise

Updates to the Continuous Delivery product aim to make DevOps practices easier. The Puppet team are clearly trying to make it easier for organizations to empower their colleagues and build a culture where engineers are not simply encouraged to take responsibility for code deployment, but are able to do it with minimal fuss. Module Delivery Pipelines now mean modules can be deployed independently without blocking others, while Simplified Puppet Deployments aims to make it easier for engineers unfamiliar with Puppet to "push simple infrastructure changes immediately and easily perform complex rolling deployments to a group of nodes in batches in one step." There is also another dimension that helps engineers take proactive steps on resiliency and security: with Impact Analysis, teams can look at the potential impact of a deployment before it's done.

Read next: "This is John. He literally wrote the book on Puppet" - An Interview with John Arundel

What's the big idea behind this announcement?

The over-arching narrative coming from the top is about supporting teams to scale their DevOps processes. It's about making organizations' 'automation footprint' more manageable. "IT teams need a simple way to get started with automation and a solution that grows with them as their automation footprint grows," explains Matt Waxman, Head of Product at Puppet. "You shouldn't have to throw away your existing scripts or tools to scale automation across your organization. Organizations need a solution that is extensible — one that complements their current automation efforts and helps them scale beyond individuals to multiple teams."

Puppet Enterprise 2019.1 will be generally available on May 7, 2019. Learn more here.
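The agentless model Bolt offers boils down to running ad-hoc commands or packaged tasks over SSH or WinRM. A hedged sketch of typical invocations; the host names and task parameters are placeholders, and older Bolt releases used --nodes where newer ones use --targets:

```shell
# Run an ad-hoc command over SSH on two hosts, no agent required
bolt command run 'uptime' --nodes web1.example.com,web2.example.com

# Run a packaged task with parameters (task and parameter names illustrative)
bolt task run package action=upgrade name=kubelet --nodes k8s-node1.example.com
```

This is the workflow behind the Autostructure quote above: the same roles and classifications that drive agent-based Puppet can be reused to push one-off changes to large groups of servers.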

GNU Shepherd 0.5.0 releases

Savia Lobo
27 Sep 2018
1 min read
Yesterday, the GNU Shepherd community announced the release of GNU Shepherd 0.5.0. GNU Shepherd, formerly known as GNU dmd, is a service manager written in Guile that looks after a herd of system services. It provides a replacement for the service-managing capabilities of SysV-init (or any other init) with both a powerful and beautiful dependency-based system and a convenient interface.

GNU Shepherd 0.5.0 contains new features and bug fixes, and was bootstrapped with tools including Autoconf 2.69, Automake 1.16.1, Makeinfo 6.5, and Help2man 1.47.6.

Changes in GNU Shepherd 0.5.0:

Services now have a 'replacement' slot.
Restarting a service now also restarts its dependent services.
When running as PID 1 on GNU/Linux, Shepherd halts upon ctrl-alt-del.
Actions can now be invoked on services that are not in the running state.
This version supports Guile 3.0; users need Guile version >= 2.0.13.
Unused runlevel code has been removed.
Updated translations in this version include es, fr, pt_BR, and sv.

To know more about this release in detail, visit the official GNU website.

GNU nano 3.0 released with faster file reads, new shortcuts and usability improvements
Network programming 101 with GAWK (GNU AWK)
GNU Octave: data analysis examples
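Day-to-day interaction with Shepherd goes through its herd client. A hedged sketch of common invocations; the service name is a placeholder, and action availability depends on how each service is defined:

```shell
# Query the status of all services managed by shepherd
herd status

# Restart one service; as of 0.5.0 its dependent services are restarted too
herd restart sshd

# Stop a service; 0.5.0 also allows invoking actions on services
# that are not currently running
herd stop sshd
```

Services themselves are defined in Guile, which is what makes Shepherd's dependency graph programmable rather than declared in static init scripts.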

Idera acquires Travis CI, the open source Continuous Integration solution

Sugandha Lahoti
24 Jan 2019
2 min read
The popular open source continuous integration service Travis CI has been acquired by Idera. Idera offers a number of B2B software solutions, ranging from database administration to application development to test management. Travis CI will be joining Idera's Testing Tools division, which also includes TestRail, Ranorex, and Kiuwan.

Travis CI assured its users that it will continue to be open source and a stand-alone solution under an MIT license. "We will continue to offer the same services to our hosted and on-premises users. With the support from our new partners, we will be able to invest in expanding and improving our core product," said Konstantin Haase, a founder of Travis CI, in a blog post. Idera will also keep the Travis Foundation running, which runs projects like Rails Girls Summer of Code, Diversity Tickets, Speakerinnen, and Prompt.

It's not just a happy day for Travis CI: the company also brings Idera its 700,000 users and high-profile customers like IBM and Zendesk. Users are quick to note that this acquisition comes at a time when Travis CI's competitors, like Circle CI, seem to be taking market share away from Travis CI. A comment on Hacker News reads, "In a past few month I started to see Circle CI badges popping here and there for opensource repositories and anecdotally many internal projects at companies are moving to GitLab and their built-in CI offering. Probably a good time to sell Travis CI, though I'd prefer if they would find a better buyer." Another user says, "Honestly, for enterprise users that is a good thing. In the hands of a company like Idera we can be reasonably confident that Travis will not disappear anytime soon."

Announcing Cloud Build, Google's new continuous integration and delivery (CI/CD) platform
Creating a Continuous Integration commit pipeline using Docker [Tutorial]
How to master Continuous Integration: Tools and Strategies
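For readers less familiar with the service: a Travis CI build is driven by a .travis.yml file at the repository root. A minimal hedged sketch; the language, versions, and commands are examples, not a prescription:

```yaml
# .travis.yml - minimal build sketch (language, versions, and commands are examples)
language: python
python:
  - "3.6"
  - "3.7"
install:
  - pip install -r requirements.txt
script:
  - pytest
```

Each entry under python expands into a separate job, which is the build-matrix model that made Travis CI popular with open source projects.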

LXD 3.8 released with automated container snapshots, ZFS compression support and more!

Melisha Dsouza
14 Dec 2018
5 min read
Yesterday, the LXD team announced the release of LXD 3.8. This is the last update for 2018, improving the previous version features as well as adding new upgrades to 3.8. LXD, also known as ‘Linux Daemon’ system container manager. It offers a user experience similar to virtual machines but using Linux containers instead. LXD is written in Go which is a free software and is developed under the Apache 2 license. LXD is secure by design in terms of unprivileged containers, resource restrictions and much more. Customers can use LXD from containers on their laptop to thousands of compute nodes. WIth advanced resource control and support for multiple storage backends, storage pools and storage volumes, LXD has been well received by the community. Features of LXD 3.8 #1 Automated container snapshots The new release includes three configuration keys to control automated snapshots and configure how their naming convention. snapshots.schedule uses a CRON pattern to determine when to perform the snapshot snapshots.schedule.stopped is a boolean used to control whether to snapshot stopped containers snapshots.pattern is a format string with pongo2 templating support used to set what the name of the snapshots should be when no name is given to a snapshot. This applicable to both, automated and unnamed, manually created snapshots. #2 Support for copy/move between projects Users can now copy or move containers between projects using the newly available  --target-project option added to both lxc copy and lxc move #3 cluster.https_address server option LXD 3.8 includes a new cluster.https_address option. This option will help users facilitate internal cluster communication, making it easy to prioritize and filter cluster traffic. Until recently, clustered LXD servers had to be configured to listen on a single IPv4 or IPv6 address and both the internal cluster traffic and regular client traffic used the same address. 
This new option is a write-once key holding the address used for cluster communication; it currently cannot be changed without removing the node from the cluster. Users can now change the regular core.https_address on clustered nodes to any address they want, making it possible to use a completely different network for internal cluster communication.

#4 Cluster image replication

LXD 3.8 introduces automatic image replication. Prior to this update, images would only get copied to other cluster members as containers on those systems requested them. The downside of this method was that if an image was only present on a single system and that system went offline, the image could not be used until the system recovered. In LXD 3.8, all manually created or imported images are replicated on at least three systems. Images that are stored in the image store only as a cache entry do not get replicated.

#5 security.protection.shift container option

In previous versions, LXD had to rely on slowly rewriting all uid/gid values on the filesystem whenever a container's idmap changed. This can be dangerous on systems prone to sudden shutdowns, as the operation cannot be safely resumed if interrupted partway. The newly introduced security.protection.shift configuration option prevents any such remapping, instead making any action that would result in one fail until the key is unset.

#6 Support for passing all USB devices

All USB devices can now be passed to a container by not specifying any vendorid or productid filter. Every USB device will be made visible to the container, including any device hotplugged after the fact.
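Put together as a quick sketch, the features above map onto the lxc client roughly as follows. The container name web, the project name staging, and the exact pongo2 field in the snapshot pattern are placeholder assumptions, not taken from the announcement:

```shell
# #1 Automated snapshots: daily at 6am, including stopped containers,
# with a pongo2-templated name (the creation_date field is an assumption)
lxc config set web snapshots.schedule "0 6 * * *"
lxc config set web snapshots.schedule.stopped true
lxc config set web snapshots.pattern "snap-{{ creation_date|date:'2006-01-02' }}"

# #2 Copy a container into another project (LXD 3.8+)
lxc copy web web-staging --target-project staging

# #5 Refuse any uid/gid remapping of the container's filesystem
lxc config set web security.protection.shift true

# #6 Pass through all USB devices by omitting vendorid/productid filters;
# hotplugged devices become visible to the container too
lxc config device add web allusb usb
```

Unsetting a key (for example lxc config unset web snapshots.schedule) reverts to the default behavior.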
#7 CLI override of default project

After reports from users that interacting with multiple projects can be tedious due to constantly having to use lxc project switch to move the client between projects, LXD 3.8 makes a --project option available throughout the command-line client, which lets users override the project for a particular operation.

#8 Bi-directional rsync negotiation

Recent LXD releases already used rsync feature negotiation, where the source tells the server what rsync features it is using so the server can match them on the receiving end. LXD 3.8 introduces the reverse of that: the LXD server indicates what it supports as part of the migration protocol, allowing the source to restrict the features it uses. This makes migration more robust, as a newer LXD can migrate containers out to an older LXD without running into rsync feature mismatches.

#9 ZFS compression support

The LXD migration protocol now detects and uses ZFS compression support when available. Combined with zpool compression, this can very significantly reduce the size of the migration stream.

Hacker News was buzzing with positive remarks for this release, with users requesting more documentation on how to use LXD containers. Some users also compared LXD containers to Docker and Kubernetes, preferring the former over the latter. In addition to these new upgrades, the release also fixes multiple bugs from the previous version. You can head over to Linuxcontainers.org for more insights on this news.

Read next:
- Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
- An update on Bcachefs- the “next generation Linux filesystem”
- The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project
Kong 1.0 is now generally available with gRPC support, updated Database abstraction object and more

Amrata Joshi
21 Dec 2018
4 min read
Yesterday, the team at Kong announced the general availability of Kong 1.0, a scalable, fast, open source microservice API gateway that manages hybrid and cloud-native architectures. Kong can be extended through plugins, including authentication, traffic control, observability, and more. Kong 1.0 was first announced earlier this year, in September at the Kong Summit. The Kong API can create a Certificate Authority which Kong nodes use to establish mutual TLS authentication with each other. Kong can also balance traffic from mail servers and other TCP-based applications, extending its reach from L7 down to L4.

What’s new in Kong 1.0?

gRPC

This release supports the gRPC protocol along with REST. gRPC is built on top of HTTP/2 and provides an option for Kong users looking to connect east-west traffic with low overhead and latency. This enables Kong users to open up more mesh deployments in hybrid environments.

New Migrations Framework in Kong 1.0

This version of Kong introduces a new Database Abstraction Object (DAO), a framework that allows migrations from one database schema to another with nearly zero downtime. The new DAO lets users upgrade their Kong cluster all at once, without any manual intervention to upgrade each node.

Plugin Development Kit (PDK)

The PDK, a set of Lua functions and variables, can be used by custom plugins to implement logic on Kong. Plugins built with the PDK will be compatible with Kong versions 1.0 and above. The PDK’s interfaces are much easier to use than the bare-bones ngx_lua API. It allows users to isolate plugin operations such as logging or caching, and it is semantically versioned, which helps maintain backward compatibility.

Service Mesh Support

Users can now easily deploy Kong as a standalone service mesh. A service mesh can help address the security challenges of microservices: it secures services by integrating multiple layers of security through Kong plugins.
It also features secure communication at every step of the request lifecycle.

Seamless Connections

This release connects services in the mesh to services across all environments, platforms, and vendors. Kong 1.0 can be used to bridge the gap between cloud-native design and traditional architecture patterns.

Robust plugin architecture

This release comes with a robust plugin architecture that offers users unparalleled flexibility. Kong plugins provide key functionality and support integrations with other cloud-native technologies, including Prometheus, Zipkin, and many others. Kong’s plugins can now execute code in the new preread phase, which improves performance.

AWS Lambda and Azure FaaS

Kong 1.0 comes with improvements to interactions with AWS Lambda and Azure FaaS, including Lambda proxy integration. The Azure Functions plugin can be used to filter out headers disallowed by HTTP/2 when proxying HTTP/1.1 responses to HTTP/2 clients.

Deprecations in Kong 1.0

Core

- The API entity and related concepts such as the /apis endpoint have been removed from this release. Routes and Services are used instead.
- The old DAO implementation and the old schema validation library have been removed.

New Admin API

- Filtering now happens with URL path changes (/consumers/x/plugins) instead of querystring fields (/plugins?consumer_id=x).
- Error messages have been reworked to be more consistent, precise, and informative.
- The PUT method has been reimplemented.

Plugins

- The galileo plugin has been removed.
- Internal modules that were used by plugin authors before the introduction of the Plugin Development Kit (PDK) in 0.14.0 have been removed, including the kong.tools.ip, kong.tools.public, and kong.tools.responses modules.

Major bug fixes

- SNIs (Server Name Indication) are now correctly paginated.
- Null and default values are now handled better.
- Datastax Enterprise 6.X no longer throws errors.
- Several typo, style, and grammar fixes have been made.
- The router no longer injects an extra / in certain cases.

Read more about this release on Kong’s blog post.

Read next:
- Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless
- Eclipse 4.10.0 released with major improvements to colors, fonts preference page and more
- Windows Sandbox, an environment to safely test EXE files is coming to Windows 10 next year
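As a minimal sketch of the Admin API filtering change described above, assuming a local Kong 1.0 node on the default admin port 8001; the consumer name alice and service name my-service are placeholders:

```shell
# Pre-1.0 style (removed): filtering via querystring fields
#   curl http://localhost:8001/plugins?consumer_id=<uuid>

# 1.0 style: filtering via the URL path
curl http://localhost:8001/consumers/alice/plugins

# Routes and Services replace the removed API entity, so plugins
# scoped to a service are listed the same way
curl http://localhost:8001/services/my-service/plugins
```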
IBM launches Nabla containers: A sandbox more secure than Docker containers

Savia Lobo
17 Jul 2018
4 min read
Docker, and container technology in general, has gotten a buzzing response from developers across the globe. With enticing features such as being lightweight and DevOps-focused, container technology has recently been taking over from virtual machines. However, many developers and organizations still prefer virtual machines, fearing that containers are less secure than VMs.

Enter IBM’s Nabla containers. IBM recently launched its brand new container tech, claiming it is more secure than Docker or any other container on the market. It is a sandbox designed for strong isolation on a host: these specialized containers cut OS system calls down to a bare minimum, with as little code as possible, which is expected to decrease the surface area available for an attack.

What are the leading causes of security breaches in containers?

IBM Research’s distinguished engineer, James Bottomley, highlights two fundamental kinds of security problems affecting containers and virtual machines (VMs):

- Vertical Attack Profile (VAP)
- Horizontal Attack Profile (HAP)

The Vertical Attack Profile, or VAP, includes the code traversed in a stack to provide a service, from input to database update to output.

A container-based virtual infrastructure

Like all other programs, this VAP code is prone to bugs, and the more code one traverses, the greater the chance of exposure to a security loophole; the density of these bugs varies. However, this profile is relatively benign, as the primary actors in hostile security attacks are the cloud tenants and the Cloud Service Providers (CSPs), which come much more into the picture in the HAP.

The Horizontal Attack Profile, or HAP, consists of stack security hole exploits that can jump either into the physical server host or into other VMs.

A HAP attack

These exploits cause what is called a failure of containment.
Here, one part of the Vertical Attack Profile belongs to the tenants (the guest kernel, guest OS, and application), while the other part (the hypervisor and host OS) belongs to the CSP. The CSP's vertical part has an additional problem, however: any exploit in this piece of the stack can be used to jump onto either the host itself or any other tenant VMs running on the host.

James states that any horizontal security failure, or HAP, is a potential business-destroying event for a CSP, so such failures must be prevented. On the other hand, an exploit occurring in the VAP owned by the tenant is seen as a tenant-only problem, which tenants are expected to locate and fix themselves. This tells us that the larger the profile (as with CSPs), the greater the probability of being exploited. HAP breaches are not that common, but whenever they occur, they ruin the system; James calls HAPs "potentially business destroying events."

IBM Nabla containers can ease HAP attacks for you

Nabla containers achieve isolation by reducing the surface for an attack on the host.

Standard containers vs Nabla containers

These containers make use of library OS (also known as unikernel) techniques adapted from the Solo5 project. These techniques help Nabla containers avoid system calls and thereby reduce the attack surface. The containers use only 9 system calls; the rest are blocked through a Linux seccomp policy.

Internals of Nabla containers

Per IBM Research, Nabla containers are more secure than other container technologies, including Docker, Google’s gVisor (a container runtime sandbox), and even Kata Containers (an open-source lightweight VM for securing containers).

Read more about IBM Nabla containers on the official GitHub website.

Read next:
- Docker isn’t going anywhere
- AWS Fargate makes Container infrastructure management a piece of cake
- Create a TeamCity project [Tutorial]
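Nabla enforces its tiny syscall surface with a Linux seccomp policy. As an illustrative analogue only (this uses Docker's seccomp mechanism, not Nabla itself), a deny-by-default profile can similarly shrink a standard container's syscall surface; the allowlist below is deliberately small and would need extending for a real workload:

```shell
# Write a deny-by-default seccomp profile with a small allowlist
cat > tiny-seccomp.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "open", "openat", "close", "fstat",
                "mmap", "mprotect", "munmap", "brk", "execve",
                "arch_prctl", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF

# Run a container under the restricted profile; syscalls outside the
# allowlist return an error instead of reaching the host kernel
docker run --rm --security-opt seccomp=tiny-seccomp.json alpine echo hello
```

The point of the sketch is the direction of travel: Nabla starts from a 9-call allowlist by design, whereas with standard containers the operator must tighten the default profile themselves.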
You can now integrate chaos engineering into your CI and CD pipelines thanks to Gremlin and Spinnaker

Richard Gall
02 Apr 2019
3 min read
Chaos engineering is a trend that has evolved quickly over the last 12 months. While for much of the last decade it was largely the preserve of Silicon Valley's biggest companies, that has been changing thanks to platforms and tools like Gremlin and an increased focus on software resiliency. Today, however, marks a particularly important step for chaos engineering, as Gremlin has partnered with Spinnaker, the Netflix-built continuous deployment platform, to allow engineering teams to automate chaos engineering 'experiments' throughout their CI and CD pipelines.

Ultimately, it means DevOps teams can think differently about chaos engineering. Gradually, this could shift chaos engineering from localized experiments that require an in-depth understanding of one's infrastructure to something built into the development and deployment process. More importantly, it makes it easier for engineering teams to take complete ownership of the reliability of their software. At a time when distributed systems bring more unpredictability into infrastructure, and when downtime has never been more costly (a Gartner report suggested downtime costs the average U.S. company $5,600 a minute all the way back in 2014), this is a step that could have a significant impact on how engineers work in the future.

Read next: How Gremlin is making chaos engineering accessible [Interview]

Spinnaker and chaos engineering

Spinnaker is an open source continuous delivery platform built by Netflix and supported by Google, Microsoft, and Oracle. It has been specifically developed for highly distributed and hybrid systems, which makes it a great fit for Gremlin and also highlights that the growth of chaos engineering is being driven by the move to cloud.
Adam Jordens, a core contributor to Spinnaker and a member of the Spinnaker Technical Oversight Committee, said that "with the rise of microservices and distributed architectures, it’s more important than ever to understand how your cloud infrastructure behaves under stress.” Jordens continued: "by integrating with Gremlin, companies will be able to automate chaos engineering into their continuous delivery platform for the continual hardening and resilience of their internet systems.”

Kolton Andrus, Gremlin CEO and co-founder, explained the importance of Spinnaker in relation to chaos engineering, saying that "by integrating with Gremlin, users can now automate chaos experiments across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, Openstack, and more, enabling enterprises to build more resilient software."

In recent months Gremlin has been working hard on products and features that make chaos engineering more accessible to companies and their engineering teams. In February, it released Gremlin Free, a free version of Gremlin designed to offer users a starting point for performing chaos experiments.
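A pipeline stage automating an experiment like those described above might call Gremlin's REST API from a script step. This is a hypothetical sketch only: the endpoint path, payload shape, and header format are assumptions drawn from Gremlin's public API documentation of the time, and the team/host identifiers are placeholders:

```shell
# Hypothetical: launch a short CPU attack against newly deployed hosts
# as a pipeline step, so the deployment can be gated on its outcome
curl -X POST "https://api.gremlin.com/v1/attacks/new" \
  -H "Authorization: Key $GREMLIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "command": { "type": "cpu", "args": ["-c", "1", "-l", "60"] },
        "target":  { "type": "Exact", "exact": ["deploy-canary-01"] }
      }'
```

The attack ID returned by such a call could then be polled so the stage fails the pipeline if the service does not stay healthy during the experiment.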
‘2019 Upskilling: Enterprise DevOps Skills’ report gives an insight into the DevOps skill set required for enterprise growth

Melisha Dsouza
05 Mar 2019
3 min read
DevOps Institute has announced the results of the "2019 Upskilling: Enterprise DevOps Skills Report". The research and analysis for this report were conducted by Eveline Oehrlich, former vice president and research director at Forrester Research. The project was supported by founding Platinum Sponsor Electric Cloud, Gold Sponsor CloudBees, and Silver Sponsor Lenovo.

The report outlines the most valued and in-demand skills needed to achieve DevOps transformation within enterprise IT organizations of all sizes. It also gives insight into the skills DevOps professionals should develop to help build a DevOps mindset and culture in their organizations.

According to Jayne Groll, CEO of DevOps Institute, “DevOps Institute is thrilled to share the research findings that will help businesses and the IT community understand the requisite skills IT practitioners need to meet the growing demand for T-shaped professionals. By identifying skill sets needed to advance the human side of DevOps, we can nurture the development of the T-shaped professional that is being driven by the requirement for speed, agility and quality software from the business.”

Key findings from the report

- 55% of survey respondents said they look for internal candidates first when searching for DevOps team members, and turn to external candidates only if no internal candidate has been identified.
- Respondents agreed that automation skills (57%), process skills (55%), and soft skills (53%) are the most important must-have skills.
- Asked which job titles their companies recently hired or are planning to hire, respondents reported: DevOps Engineer/Manager, 39%; Software Engineer, 29%; DevOps Consultant, 22%; Test Engineer, 18%; Automation Architect, 17%; and Infrastructure Engineer, 17%.
- Other recruits included CI/CD Engineers, 16%; System Administrators, 15%; Release Engineers/Managers, 13%; and Site Reliability Engineers, 10%.
- Functional skills and key technical skills, when combined, complement the soft skills required to create qualified DevOps engineers.
- Automation, process, and soft skills are the “must-have” skills for a DevOps engineer; process skills are needed for intelligent automation.
- Another key functional skill is IT operations, with security coming in second.
- Business skills are most important to leaders, but less so to individual contributors.
- Cloud and analytical knowledge are the top technical skills.
- Recruiting for DevOps is on the rise.

Source: Press release, DevOps Institute’s '2019 Upskilling: Enterprise DevOps Skills Report'

The following figure shows the priorities across the top skill categories relative to the key roles surveyed:

Source: Press release, DevOps Institute’s '2019 Upskilling: Enterprise DevOps Skills Report'

Oehrlich also said in a statement that hiring managers see a DevOps professional as a creative, knowledge-sharing, eager-to-learn individual with shapeable skill sets. Andre Pino, vice president of marketing at CloudBees, said in a statement that “The survey results show the importance for developers and managers to have the right skills that empower them to meet business objectives and have a rewarding career in our fast-paced industry.”

You can check out the entire report for more insights on this news.

Read next:
- Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!
- Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
- JFrog acquires DevOps startup ‘Shippable’ for an end-to-end DevOps solution