Tech News - DevOps

82 Articles
Kubernetes 1.11 is here!

Vijin Boricha
28 Jun 2018
3 min read
This is the second release of Kubernetes in 2018. Kubernetes 1.11 comes with significant updates to features that revolve around the maturity, scalability, and flexibility of Kubernetes. This newest version comes with storage and networking enhancements that make it possible to plug any kind of infrastructure, cloud or on-premise, into the Kubernetes system. Now let's dive into the key aspects of this release.

IPVS-Based In-Cluster Service Load Balancing Promotes to General Availability

IPVS has a simpler programming interface than iptables and delivers high-performance in-kernel load balancing. In this release it has moved to general availability, where it provides better network throughput, lower programming latency, and higher scalability limits. It is not yet the default option, but clusters can use it for production traffic.

CoreDNS Graduates to General Availability

CoreDNS has moved to general availability and is now the default option when using kubeadm. It is a flexible DNS server that integrates directly with the Kubernetes API. In comparison to the previous DNS server, CoreDNS has fewer moving parts, as it is a single process that creates custom DNS entries to support flexible use cases. CoreDNS is also memory-safe, as it is written in Go.

Dynamic Kubelet Configuration Moves to Beta

It has always been difficult to update Kubelet configurations in a running cluster, as Kubelets are configured through command-line flags. With this feature moving to beta, one can configure Kubelets in a live cluster through the API server.

CSI enhancements

Over the past few releases, CSI (Container Storage Interface) has been a major focus area. CSI was moved to beta in version 1.10. In this version, the Kubernetes team continues to enhance CSI with a number of new features, such as alpha support for raw block volumes, integration with the new kubelet plugin registration mechanism, and an easier way to pass secrets to CSI plugins.

Enhanced Storage Features

This release introduces online resizing of Persistent Volumes as an alpha feature. With this feature, users can increase the size of a PV without terminating pods or unmounting the volume: the user updates the PVC to request a new size, and the kubelet resizes the file system for the PVC (a hedged sketch of this flow appears at the end of this article). Dynamic maximum volume count is introduced as an alpha feature; it lets in-tree volume plugins specify the number of volumes that can be attached to a node, allowing the limit to vary based on the node type. In earlier versions, the limits were configured through an environment variable. The StorageObjectInUseProtection feature is now stable and prevents issues caused by deleting a Persistent Volume or a Persistent Volume Claim that is bound to an active pod.

You can learn more about Kubernetes 1.11 from the Kubernetes blog, and this version is available for download on GitHub. To get started with Kubernetes, check out our following books: Learning Kubernetes [Video], Kubernetes Cookbook - Second Edition, and Mastering Kubernetes - Second Edition.

Related links:
VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
Rackspace now supports Kubernetes-as-a-Service
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
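To make the PV resize flow described above concrete, here is a minimal sketch using the official kubernetes Python client. It assumes a 1.11 cluster with the alpha online-resize feature enabled; the claim name, namespace, and new size are hypothetical placeholders.

```python
# Hedged sketch: online PV resize by patching the PVC's storage request.
# Assumes the official `kubernetes` Python client and a Kubernetes 1.11
# cluster with the (alpha) online volume resize feature enabled.
# "data-pvc", "default", and "20Gi" are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Only spec.resources.requests.storage changes; the pod keeps running while
# the volume and its filesystem are resized online.
patch = {"spec": {"resources": {"requests": {"storage": "20Gi"}}}}
v1.patch_namespaced_persistent_volume_claim(
    name="data-pvc", namespace="default", body=patch
)
```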

Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace

Melisha Dsouza
11 Sep 2018
4 min read
Microsoft is rebranding Visual Studio Team Services (VSTS) to Azure DevOps, along with Azure DevOps Server, the successor of Team Foundation Server (TFS). Microsoft understands that DevOps has become increasingly critical to a team's success; the rebranding aims to help teams ship higher-quality software faster. Azure DevOps supports both public and private cloud configurations. The services are open and extensible and designed to work with any type of application, framework, platform, or cloud. Since the Azure DevOps services work well together, users can gain more control over their projects.

Azure DevOps is free for open source projects and small projects of up to five users. For larger teams, the cost ranges from $30 per month to $6,150 per month, depending on the number of users. VSTS users will be upgraded to Azure DevOps projects automatically without any loss of functionality. URLs will change from abc.visualstudio.com to dev.azure.com/abc, and redirects from visualstudio.com URLs will be supported to avoid broken links. New users will get the update starting 10th September 2018, and existing users can expect the update in the coming months.

Key features in Azure DevOps:

#1 Azure Boards
Users can keep track of their work at every development stage with Kanban boards, backlogs, team dashboards, and custom reporting. Built-in scrum boards and planning tools help in planning meetings, while powerful analytics tools give new insights into the health and status of projects.

#2 Azure Artifacts
Users can easily manage Maven, npm, and NuGet package feeds from public and private sources. Storing and sharing code across small teams and large enterprises is now efficient thanks to Azure Artifacts. Users can share packages and use built-in CI/CD, versioning, and testing, and they can easily access all their artifacts in builds and releases.

#3 Azure Repos
Users get unlimited cloud-hosted private Git repos for their projects. They can securely connect with and push code into their Git repos from any IDE, editor, or Git client. Code-aware searches help them find what they are looking for. They can perform effective Git code reviews and use forks to promote collaboration with inner source workflows. Azure Repos helps users maintain high code quality by requiring code-reviewer sign-off, successful builds, and passing tests before pull requests can be merged.

#4 Azure Test Plans
Users can improve their code quality using planned and exploratory testing services for their apps. Test plans help users capture rich scenario data, test their applications, and take advantage of end-to-end traceability.

#5 Azure Pipelines
There's more in store for VSTS users: for a seamless developer experience, Azure Pipelines is now also available in the GitHub Marketplace. Users can easily configure a CI/CD pipeline for any Azure application using their preferred language and framework. Pipelines can be built and deployed with ease, and they provide status reports, annotated code, and detailed information on changes to the repo within the GitHub interface. Pipelines work with any platform, such as Azure, Amazon Web Services, and Google Cloud Platform, and can run on apps across operating systems, including Android, iOS, Linux, macOS, and Windows. Pipelines are free for open source projects.

Microsoft has tried to improve the user experience by introducing these upgrades. Are you excited yet? You can learn more at the Microsoft live Azure DevOps keynote today at 8:00 a.m. Pacific, and at a workshop with Q&A on September 17 at 8:30 a.m. Pacific, on Microsoft's events page. You can read all the details of the announcement on Microsoft's official blog.

Related links:
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
8 ways Artificial Intelligence can improve DevOps

Docker announces collaboration with Microsoft’s .NET at DockerCon 2019

Savia Lobo
02 May 2019
4 min read
Using Docker and .NET together was first brought up in 2017, when Microsoft explained the cons of using them together. Last year's DockerCon update showed multiple .NET demos of how one can use Docker both for modern applications and for older applications that use traditional architectures. This made it easier for users to containerize .NET applications using tools from both Microsoft and Docker. The team said that "most of their effort to improve the .NET Core Docker experience in the last year has been focused on .NET Core 3.0." "This is the first release in which we've made substantive runtime changes to make CoreCLR much more efficient, honor Docker resource limits better by default, and offer more configuration for you to tweak," Microsoft writes in one of their blog posts. The team also mentions that they are invested in making .NET Core a true container runtime, and they look forward to hardening .NET's runtime to make it container-aware and function efficiently in low-memory environments. Let's have a look at the different advantages of bringing Docker and .NET together.

Docker + .NET advantages

Less memory allocation and fewer GC heaps by default
With .NET Core 3.0, the team reduced the minimal generation 0 GC allocation budget to better align with modern processor cache sizes and cache hierarchy. With this, the initial allocation size, which was unnecessarily large, was significantly reduced without any perceivable loss of performance, bringing in tens of percentage points of improvement. The team also mentions a new policy for determining how many GC heaps to create. This matters most on machines where a low memory limit is set but no CPU limit is set on a machine with many CPU cores. The GC now reserves a memory segment with a minimum size of 16 MB per heap, which limits the number of heaps the GC will create. Both changes result in lower memory usage by default and make the default .NET Core configuration better in more cases. (A sketch of how a runtime discovers its container memory limit follows below.)
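The "container-aware" behavior described above ultimately rests on reading the limits the container runtime exposes through cgroups. Here is a minimal Python sketch of the same idea (an illustration, not Microsoft's implementation), assuming the cgroup v1 layout a typical Docker container mounts:

```python
# Hedged sketch: discovering the container memory budget from cgroup v1,
# the same signal a container-aware runtime such as CoreCLR consults.
# Hosts using cgroup v2 expose /sys/fs/cgroup/memory.max instead.
CGROUP_V1_LIMIT = "/sys/fs/cgroup/memory/memory.limit_in_bytes"

def container_memory_limit():
    """Return the memory limit in bytes, or None if no limit applies."""
    try:
        with open(CGROUP_V1_LIMIT) as f:
            limit = int(f.read().strip())
    except OSError:
        return None  # not running under a v1 memory cgroup
    # An effectively-infinite sentinel value means no limit was set.
    return None if limit >= 2**60 else limit

if __name__ == "__main__":
    print(container_memory_limit())
```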
Added PowerShell to .NET Core SDK container images
PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. Having PowerShell inside the .NET Core SDK container image enables two main scenarios that were not otherwise possible: writing .NET Core application Dockerfiles with PowerShell syntax, for any OS, and writing .NET Core application/library build logic that can be easily containerized. Note: PowerShell Core is available as part of the .NET Core 3.0 SDK container images; it is not part of the .NET Core 3.0 SDK itself.

.NET Core images now available via Microsoft Container Registry
Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change: to syndicate Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat, and to use Microsoft Azure as a global CDN for delivering Microsoft-provided container images.

Platform matrix and support
With .NET Core, the team tries to support a broad set of distros and versions. The policy for each distro is as follows. Alpine: support the tip version and retain support for one quarter (three months) after a new version is released; currently 3.9 is tip, and the team will stop producing 3.8 images in a month or two. Debian: support one Debian version per latest .NET Core version; this is also the default Linux image used for a given multi-arch tag. For .NET Core 3.0 the team may publish Debian 10 based images; however, they may produce Debian 9 based images for .NET Core 2.1 and 2.2, and Debian 8 images for earlier .NET Core versions. Ubuntu: support one Ubuntu version per latest .NET Core version (currently 18.04); as the team learns of new Ubuntu LTS versions, they will start supporting non-LTS Ubuntu versions as a means of validating the new LTS versions. For Windows, they support the cross-product of Nano Server and .NET Core versions.

ARM architecture
The team plans to add support for ARM64 on Linux with .NET Core 3.0, complementing the ARM32 and X64 support already in place. This will enable .NET Core to be used in even more environments.

Apart from these advantages, the team has also added support for Docker memory and CPU limits. To know more about this partnership in detail, read Microsoft's official blog post.

Related links:
DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Are Debian and Docker slowly losing popularity?
Creating a Continuous Integration commit pipeline using Docker [Tutorial]

Red Hat open sources Project Quay container registry

Savia Lobo
13 Nov 2019
2 min read
Yesterday, Red Hat introduced the open source Project Quay container registry, the upstream project representing the code that powers Red Hat Quay and Quay.io. Open sourced as a Red Hat commitment, Project Quay "represents the culmination of years of work around the Quay container registry since 2013 by CoreOS, and now Red Hat," the official post reads.

The Red Hat Quay container image registry provides storage and enables users to build, distribute, and deploy containers. It also helps users gain more security over their image repositories with automation, authentication, and authorization systems. It is compatible with most container environments and orchestration platforms and is available as a hosted service or on-premises.

Launched in 2013, Quay grew in popularity due to its focus on developer experience and highly responsive support, and it added capabilities such as image rollback and zero-downtime garbage collection. Quay was acquired by CoreOS in 2014 with a mission to secure the internet through automated operations. Shortly after the acquisition, the company released the on-premises offering of Quay, which is presently known as Red Hat Quay.

The Quay team also created the Clair open source container security scanning project, which has been integrated with Quay since 2015 and is built directly into Project Quay. Clair enables the container security scanning feature in Red Hat Quay, which helps users identify known vulnerabilities in their container registries. Open sourced as part of Project Quay, both the Quay and Clair code bases will help cloud-native communities lower the barrier to innovation around containers, making containers more secure and accessible.

Project Quay contains a collection of open-source software licensed under Apache 2.0 and other open-source licenses, and it follows an open-source governance model with a maintainer committee. With an open community, Red Hat Quay and Quay.io users can benefit from being able to work together on the upstream code. Project Quay will be officially launched at the OpenShift Commons Gathering on November 18 in San Diego at KubeCon 2019. To know more about this announcement, you can read Red Hat's official blog post.

Related links:
Red Hat announces CentOS Stream, a "developer-forward distribution", jointly with the CentOS Project
Expanding WebAssembly beyond the browser with Bytecode Alliance, a Mozilla, Fastly, Intel and Red Hat partnership
After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

Data Measured in Terms of Real Aggregate Value from DevOps.com

Matthew Emerick
16 Oct 2020
1 min read
The post Data Measured in Terms of Real Aggregate Value appeared first on DevOps.com.

Fastly announces the next-gen edge computing services available in private beta

Fatema Patrawala
08 Nov 2019
4 min read
Fastly, a San Francisco-based startup providing an edge cloud platform, yesterday announced the private beta launch of Compute@Edge, its new edge computing service. Compute@Edge is a powerful language-agnostic compute environment. This major milestone marks an evolution of Fastly's edge computing capabilities and the company's innovation in the serverless space.

https://twitter.com/fastly/status/1192080450069643264

Fastly's Compute@Edge is designed to empower developers to build far more advanced edge applications with greater security, more robust logic, and new levels of performance. They can also create new and improved digital experiences with their own technology choices around the cloud platforms, services, and programming languages needed. Rather than spend time on operational overhead, the company's goal is to continue reinventing the way end users live, work, and play on the web. Fastly's Compute@Edge gives developers the freedom to push complex logic closer to end users.

"When we started Fastly, we sought to build a platform with the power to realize the future of edge computing — from our software-defined modern network to our point of presence design, everything has led us to this point," explained Tyler McMullen, CTO of Fastly. "With this launch, we're excited to double down on that vision and work with enterprises to help them build truly complete applications in an environment that offers new levels of stability, security, and global scale."

We had the opportunity to interview Fastly's CTO Tyler McMullen a few months back, where we discussed Fastly's Lucet and the future of WebAssembly and Rust, among other things. You can read the full interview here.

Fastly Compute@Edge leverages speed for global scale and security

Fastly's Compute@Edge environment promises a startup time of 35.4 microseconds, 100x faster than any other solution on the market. Additionally, Compute@Edge is powered by Lucet, Fastly's open-source WebAssembly compiler and runtime, and supports Rust as a second language in addition to Varnish Configuration Language (VCL). Other benefits of Compute@Edge include:

Code can be computed around the world instead of in a single region, allowing developers to reduce code execution latency and further optimize the performance of their code without worrying about managing the underlying infrastructure.
The unmatched speed at which the environment operates, combined with Fastly's isolated sandboxing technology, reduces the risk of accidental data leakage; with a "burn-after-reading" approach to request memory, entire classes of vulnerabilities are eliminated.
With Compute@Edge, developers can serve GraphQL from the network edge and deliver more personalized experiences.
Developers can build their own customized API protection logic.
With manifest manipulation, developers can deliver content with a "best-performance-wins" approach, like multi-CDN live streams that run smoothly for users around the world.

Fastly has operated in the serverless market since its founding in 2011 through its Edge Cloud Platform, including products like Full Site Delivery, Load Balancer, DDoS, and Web Application Firewall (WAF). To date, Fastly's serverless computing offering has focused on delivery-centric use cases via its VCL-powered programmable edge. With the introduction of Compute@Edge, Fastly unlocks even more powerful and widely applicable computing capabilities. To learn more about Fastly's edge computing and cloud services, you can visit its official blog. Developers who are interested in being part of the private beta can sign up on this page.

Related links:
Fastly SVP, Adam Denenberg on Fastly's new edge resources, edge computing, fog computing, and more
Fastly, edge cloud platform, files for IPO
Fastly open sources Lucet, a native WebAssembly compiler and runtime
"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

Neuvector releases “Security Policy as Code” to help DevOps teams automate container security by using CRDs

Sugandha Lahoti
19 Nov 2019
2 min read
NeuVector has released a new "Security Policy as Code" capability for Kubernetes workloads. This release automates container security for DevOps teams by using Kubernetes Custom Resource Definitions (CRDs). As security policies can be defined, managed, and automated during the DevOps process, teams will be able to quickly deliver secure cloud-native apps. These security policies can be implemented using CRDs to deploy customized resource configurations via YAML files. Because the security policies are defined as code, they are version-tracked and built for easy automation. Teams can easily migrate security policies across Kubernetes clusters (or from staging to production environments) and manage versions of security policies tied to specific application versions.

"By introducing our industry-first Security Policy as Code for Kubernetes workloads, we're excited to provide DevOps and DevSecOps teams with even more control to automate safe behaviors and ensure their applications remain secure from ever-increasing threat vectors," explains Gary Duan, CTO, NeuVector. "We continue to build out new capabilities sought by customers – such as DLP, multi-cluster management, and, with today's release, CRD support. Our mission is acutely focused on raising the bar for container security by offering a complete cloud-native solution for the entire application lifecycle."

Features of NeuVector's Security Policy as Code (a hedged sketch of applying such a policy appears at the end of this article):

Captures network rules, protocols, processes, and file activities that are allowed for the application.
Permits allowed network connections between services, enforced by application protocol (layer 7) inspection.
Allows or prevents external or ingress connections as warranted.
Sets the "protection mode" of the application to either Monitor mode (alerting only) or Protect mode (blocking all suspicious activity).
Supports integration with Open Policy Agent (OPA) and other security policy management tools.
Allows DevOps and security teams to define application policies at different hierarchies, such as per-service rules defined by DevOps and global rules defined by centralized security teams.
Is extensible to support future expansion of security policy as code to admission control rules, DLP rules, response rules, and other NeuVector enforcement policies.

Head on to NeuVector's blog for more details on the Security Policy as Code feature. Further details about this release will be shared at KubeCon + CloudNativeCon North America 2019.

Related links:
Chaos engineering comes to Kubernetes thanks to Gremlin
CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries
StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities
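As an illustration of the "policy as code" workflow, the sketch below applies a custom resource with the official kubernetes Python client. The NvSecurityRule group, version, plural, and spec fields here are assumptions for illustration only; the authoritative schema lives in NeuVector's CRD documentation.

```python
# Hedged sketch: applying a security policy as a Kubernetes custom resource.
# The group/version/kind and the spec fields below are illustrative
# assumptions, NOT a verbatim NeuVector schema.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

policy = {
    "apiVersion": "neuvector.com/v1",   # assumed CRD group/version
    "kind": "NvSecurityRule",           # assumed kind
    "metadata": {"name": "web-policy", "namespace": "default"},
    "spec": {
        # Protect mode blocks suspicious activity; Monitor mode only alerts.
        "target": {"policymode": "Protect", "selector": {"name": "web"}},
    },
}

# Because the policy is plain data, it can live in version control and be
# promoted from staging to production clusters like any other code.
api.create_namespaced_custom_object(
    group="neuvector.com", version="v1", namespace="default",
    plural="nvsecurityrules", body=policy,
)
```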

Opsera Simplifies Building of DevOps Pipelines from DevOps.com

Matthew Emerick
15 Oct 2020
1 min read
Fresh off raising $4.3 million in funding, Opsera today launched a namesake platform that enables IT teams to orchestrate both the tools employed by developers as well as the pipelines that make up a DevOps process. Company co-founder Chandra Ranganathan said the Opsera platform automates setup of DevOps pipelines using a declarative approach that doesn’t […] The post Opsera Simplifies Building of DevOps Pipelines appeared first on DevOps.com.

Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads

Savia Lobo
21 Jun 2018
2 min read
Nvidia recently announced at the Computer Vision and Pattern Recognition (CVPR) conference that it will make Kubernetes available on its GPUs. Although the technology is not yet generally available, developers are allowed to use it to test the software and provide feedback.

Source: Kubernetes on Nvidia GPUs

Kubernetes on NVIDIA GPUs will allow developers and DevOps engineers to build and deploy scalable GPU-accelerated deep learning training, and it can also be used to create inference applications on multi-cloud GPU clusters. Using this technology, developers can handle the growing number of AI applications and services by automating processes such as the deployment, maintenance, scheduling, and operation of GPU-accelerated application containers. One can orchestrate deep learning and HPC applications on heterogeneous GPU clusters, with easy-to-specify attributes such as GPU type and memory requirement. It also offers integrated metrics and monitoring capabilities for analyzing and improving GPU utilization on clusters.

Interesting features of Kubernetes on Nvidia GPUs include (a hedged sketch of requesting a GPU appears at the end of this article):

GPU support in Kubernetes can be used via the NVIDIA device plugin.
One can easily specify GPU attributes such as GPU type and memory requirements for deployment in heterogeneous GPU clusters.
GPU metrics and health can be visualized and monitored with an integrated GPU monitoring stack of NVIDIA DCGM, Prometheus, and Grafana.
Multiple underlying container runtimes such as Docker and CRI-O are supported.
It is officially supported on all NVIDIA DGX systems (DGX-1 Pascal, DGX-1 Volta, and DGX Station).

Read more about this exciting news on the Nvidia Developer blog.

Related links:
NVIDIA brings new deep learning updates at CVPR conference
Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!
Distributed TensorFlow: Working with multiple GPUs and servers
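For a sense of what "easy to specify" looks like in practice, here is a hedged sketch that requests one GPU for a pod via the official kubernetes Python client. It assumes the NVIDIA device plugin is installed so nodes advertise the nvidia.com/gpu resource; the pod name and image are illustrative.

```python
# Hedged sketch: scheduling a container onto a GPU node. Assumes the
# NVIDIA device plugin is deployed, so nodes expose "nvidia.com/gpu".
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="cuda",
            image="nvidia/cuda:10.0-base",      # illustrative CUDA image
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"}  # request exactly one GPU
            ),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```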

Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!

Melisha Dsouza
22 Nov 2018
3 min read
On 19th November, Microsoft announced the first release candidate (RC) of Azure DevOps Server 2019. Azure DevOps Server 2019 includes the new, fast, and clean Azure DevOps user interface and delivers the codebase of Microsoft Azure DevOps while being optimized for customers who prefer to self-host.

Features of Azure DevOps Server 2019

In addition to existing SQL Server support, Azure DevOps Server now also supports Azure SQL. This means customers can self-host Azure DevOps in their own datacenter using an on-premises SQL Server, or take advantage of Azure SQL capabilities and performance, like backup features and scaling options, while reducing the administrative overhead of running the service by self-hosting Azure DevOps in the cloud. Customers can also use the globally available Microsoft-hosted service to take advantage of automatic updates and automatic scaling of Azure DevOps Server.

Azure DevOps Server 2019 includes a new release management interface. Customers can easily understand how their deployment is taking place; it gives them better visibility into which bits are deployed to which environments and why. Customers can also mix and match agents self-hosted on-premises and in any cloud on Windows, Mac, or Linux while easily deploying to IaaS or PaaS in Azure as well as on-premises infrastructure.

A new navigation and improved user experience in Azure DevOps

A newly introduced feature is the 'my work flyout'. This feature was developed after feedback that when a customer is in one part of the product and wants some information from another part, they don't want to lose the context of their current task. With this new feature, customers can access the flyout from anywhere in the product, giving them a quick glance at crucial information like work items, pull requests, and all favorites.

For teams that use pull requests (PRs) and branch policies, there may be occasions when members need to override and bypass those policies. So that teams can verify that those policy overrides are being used in the right situations, a new notification filter has been added to allow users and teams to receive email alerts any time a policy is bypassed.

The Tests tab now gives rich, in-context test information for Pipelines. It provides an in-progress test view, a full-page debugging experience, in-context test history, reporting of aborted test executions, and a run-level summary.

The UI has undergone significant testing, and the team notes that for self-hosting customers the new navigation model may require updates to internal documentation and training. A direct upgrade to Azure DevOps Server is supported from Team Foundation Server 2012 and newer; previous versions of Team Foundation Server will stay on the old user interface. Check the Azure DevOps Server requirements and compatibility page to understand the dependencies required for a self-hosted installation.

Head over to Microsoft's blog for more information on this news. You can download Azure DevOps Server 2019 RC1 and check out the release notes for all the features and information for this release.

Related links:
A multi-factor authentication outage strikes Microsoft Office 365 and Azure users
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

Azure DevOps outage root cause analysis starring greedy threads and rogue scale units

Prasad Ramesh
19 Oct 2018
4 min read
Azure DevOps suffered several outages earlier this month, and Microsoft has done a root cause analysis to find the causes. This comes after last month's weather-related Azure cloud outage.

Incidents on October 3, 4 and 8

It started on October 3 with a networking issue in the North Central US region lasting over an hour. It happened again the following day, lasting another hour. On following up with the Azure networking team, it was found that there were no networking issues when the outages happened. Another incident happened on October 8. The team realized that something was fundamentally wrong, which is when an analysis of telemetry was done; the issue was still not found.

After the third incident, it was found that the thread count on the machine continued to rise. This was an indication that some activity was going on even with no load coming to the machine. It was found that all 1202 threads had the same call stack, with the following being the key call:

Server.DistributedTaskResourceService.SetAgentOnline

Agent machines send a heartbeat signal every minute to the service to notify it that they are online. If no signal arrives from an agent for over a minute, it is marked offline and the agent needs to reconnect. The agent machines were marked offline in this case, and they eventually reconnected after retries. On success, each agent was stored in an in-memory list; potentially thousands of agents were reconnecting at a time.

In addition, there was a cause for the threads to fill up with messages, since asynchronous call patterns had been adopted recently. The .NET messaging layer stores a queue of messages to process and maintains a thread pool; as a thread becomes available, it services the next message in the queue.

Source: Microsoft

The thread pool, in this case, was smaller than the queue. For N threads, N messages are processed simultaneously. When an async call is made, the same message queue is used: a new message is queued to complete the async call and read its value. This completion message sits at the end of the queue while all the threads are occupied processing other messages, so the call cannot complete until the earlier messages have completed, tying up one thread. The process comes to a standstill when N messages are being processed, where N equals the number of threads. In this state, a device can no longer process requests, causing the load balancer to take it out of rotation; hence the outage. (A small sketch of this starvation pattern follows below.) An immediate fix was to conditionalize this code so no more async calls were made; this was possible because the pool providers feature isn't in effect yet.

Incident on October 10

On October 10, an incident with a 15-minute impact took place. The initial problem was the result of a spike in slow response times from SPS, ultimately caused by problems in one of the databases. Team Foundation Server (TFS) puts pressure on SPS, their authentication service. When TFS is deployed, sets of scale units called deployment rings are also deployed, and when the deployment for a scale unit completes, it puts extra pressure on SPS; there are built-in delays between scale units to accommodate the extra load. There is also sharding going on in SPS to break it into multiple scale units. Together, these factors tripped the circuit breakers in the database, leading to slow response times and failed calls. This was mitigated by manually recycling the unhealthy scale units. For more details and the complete analysis, visit the Microsoft website.
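The starvation mechanism is not specific to .NET; any fixed-size pool that must run its own completions can wedge the same way. Here is a minimal Python analogue (illustrative only, not Microsoft's code):

```python
# Hedged sketch: sync-over-async starvation on a fixed-size thread pool.
import concurrent.futures
import os

pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)  # the N threads

def handle(msg):
    # The "async" completion is queued on the SAME pool...
    inner = pool.submit(str.upper, msg)
    # ...and this worker blocks on it, tying up one of the N threads.
    return inner.result()

# N messages on N workers: every worker waits on a completion that sits
# behind it in the queue, so nothing ever finishes -- the standstill that
# pulled machines out of load-balancer rotation.
futures = [pool.submit(handle, m) for m in ("ping", "pong")]
done, stuck = concurrent.futures.wait(futures, timeout=2)
print(f"completed: {len(done)}, deadlocked: {len(stuck)}")  # 0 and 2
os._exit(0)  # the pool can never drain, so exit without joining it
```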
Related links:
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary.
Is your Enterprise Measuring the Right DevOps Metrics?

Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Richard Gall
28 Sep 2018
3 min read
Gremlin, the chaos engineering platform, has revealed some exciting news today to coincide with the very first chaos engineering conference, Chaos Conf. Not only has the company raised $18 million in its series B funding round, it has also launched a brand new feature. Application Level Fault Injection (ALFI) brings a whole new dimension to the Gremlin platform, as it allows engineering teams to run resiliency tests, or 'chaos experiments', at the application level. Up until now, tests could only be run at the infrastructure level, targeting a specific host or container (although containers are only a recent addition). (A generic sketch of the application-level idea appears at the end of this article.)

Bringing chaos engineering to serverless applications

One of the benefits of ALFI is that it makes it possible to run 'attacks' on serverless applications. Citing Cloudability's State of the Cloud 2018 report, the press release highlights that serverless adoption is growing rapidly. This means that Gremlin will now be able to expand its use cases and continue to move forward in its broader mission: helping engineering teams improve the resiliency of their software in a manageable and accessible way.

Matt Fornaciari, Gremlin CTO and co-founder, said: "With ALFI one of the first problems we wanted to address was improving the reliability and understanding of serverless providers like AWS Lambda and Azure Functions. It's a tough problem to solve because the host is abstracted and it's a very new technology -- but now we can run attacks at the application level, and with a level of precision that isn't possible at the infrastructure level. We are giving them a scalpel to very specifically target particular communications between different systems and services."

One of the great benefits of ALFI is that it should help engineers tackle threats that might be missed by focusing on infrastructure alone. Yan Cui, Principal Engineer at DAZN, the sports streaming service, explained: "AWS Lambda protects you against some infrastructure failures, but you still need to defend against weakness in your own code. Application-level fault injection is a great way to uncover these weaknesses."

A new chapter for Gremlin and a big step forward for chaos engineering

It would seem that Gremlin is about to embark on a new chapter. But what will be even more interesting is the wider impact chaos engineering has on the industry. Research, such as this year's Packt Skill Up survey, indicates that chaos engineering is a trend still in an emergent phase. If Gremlin can develop a product that not only makes chaos engineering relatively accessible but also palatable for those making technical decisions, we might start to see things changing. It's clear that Redpoint Ventures, the VC firm leading Gremlin's series B funding, sees a lot of potential in what the platform can offer the software landscape. Managing Director Tomasz Tuguz said: "In a world where nearly every business is an online business, Gremlin makes companies more resilient and saves millions of dollars in unnecessary disasters and outages. We're thrilled to join them on this journey."
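To make the application-level idea concrete, here is a generic sketch of fault injection at a single call site: a decorator that slows and fails a targeted fraction of calls. This illustrates the concept only; it is not Gremlin's ALFI API.

```python
# Hedged sketch of application-level fault injection: target one call site
# and inject latency/errors for a fraction of requests. Generic
# illustration only; not Gremlin's ALFI API.
import functools
import random
import time

def inject_fault(failure_rate=0.1, extra_latency_s=0.5):
    """Wrap a function so a fraction of calls is slowed and then fails."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                time.sleep(extra_latency_s)           # simulate a slow dependency
                raise TimeoutError("injected fault")  # simulate a failed call
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_fault(failure_rate=0.2)
def call_payment_service(order_id):
    # Hypothetical downstream call that the chaos experiment targets;
    # the surrounding retry/fallback logic is what is under test.
    return {"order": order_id, "status": "charged"}
```

Unlike an infrastructure-level attack on a whole host or container, a wrapper like this can target one specific communication path, which is the precision Gremlin's CTO describes above.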

Microsoft announces Azure DevOps bounty program

Prasad Ramesh
18 Jan 2019
2 min read
Yesterday, the Microsoft Security Response Center (MSRC) announced the launch of the Azure DevOps Bounty program, a program launched to solidify the security provided to Azure DevOps customers. Microsoft is offering rewards of up to US$20,000 for eligible vulnerabilities found in Azure DevOps online services and Azure DevOps Server. The bounty rewards range from $500 to $20,000 US. The reward will depend on Microsoft's assessment of the severity and impact of a vulnerability, as well as on the quality of the submission, subject to their bounty terms and conditions.

The products in focus are Azure DevOps Services, previously known as Visual Studio Team Services, and the latest versions of Azure DevOps Server and Team Foundation Server. The goal of the program is to find any eligible vulnerabilities that have a direct security impact on the customer base. To be eligible, a submission should fulfil the following criteria:

It identifies a previously unreported vulnerability in one of the services or products.
For web application vulnerabilities, the vulnerability must impact supported browsers for Azure DevOps server, services, or plug-ins.
It documents steps that are clear and reproducible, in text or video form; any information that helps Microsoft quickly reproduce and understand the issue can result in a faster response and higher rewards.

Submissions that Microsoft deems ineligible under these criteria may be rejected. You can send your submissions to secure@microsoft.com, following the bug submission guidelines. Participants are requested to use Coordinated Vulnerability Disclosure when reporting vulnerabilities. Note that there are no restrictions on how many vulnerabilities you can report or on the rewards for them; when there are duplicate submissions, the first one will be chosen for the reward.

For more details about the eligible vulnerabilities and the Microsoft Azure DevOps Bounty program, visit the Microsoft website.

Related links:
8 ways Artificial Intelligence can improve DevOps
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft open sources Trill, a streaming engine that employs algorithms to process "a trillion events per day"

What's new in Docker Enterprise Edition 2.0?

Gebin George
18 Apr 2018
3 min read
Docker Enterprise Edition 2.0 was released yesterday. The major focus of this new release (and the platform as a whole) is speeding up multi-cloud initiatives and automating the application delivery model, which go hand-in-hand with the DevOps and Agile philosophy. Docker has become an important tool for businesses in a very short space of time. With Docker EE 2.0, it looks like Docker will consolidate its position as the go-to containerization tool for enterprise organizations.

Key features of Docker Enterprise Edition 2.0

Let's look at some of the key capabilities included in the Docker EE 2.0 release.

Docker EE 2.0 is incredibly flexible

Flexibility is one of the biggest assets of Docker Enterprise Edition, as today's software delivery ecosystem demands freedom of choice. Organizations that build applications on different platforms, using varied sets of tools, deploying on different infrastructures, and running them on different platforms require a huge amount of flexibility. Docker EE addresses this concern with the following capabilities:

Multi-Linux, Multi-OS, Multi-Cloud: Many organizations have adopted a hybrid-cloud or multi-cloud strategy and build applications on different operating systems. Docker EE is supported across all the popular operating systems, such as Windows and Windows Server and all the popular Linux distributions, and also on popular public clouds, enabling users to deploy applications flexibly, wherever required.

Docker EE 2.0 is interoperable with Docker Swarm and Kubernetes: Container orchestration forms the core of DevOps, and the entire container ecosystem revolves around Swarm or Kubernetes. Docker EE allows flexibility in switching between these tools for application deployment and orchestration. Applications deployed on Swarm today can be easily migrated to Kubernetes using the same compose file, making developers' lives simpler.

Accelerating agile with Docker Enterprise Edition 2.0

Docker EE focuses on monitoring and managing containers to a much greater extent than the open source version of Docker. The Enterprise Edition has a specialized management and monitoring platform for looking after Kubernetes clusters and also has access to the Kubernetes API, CLI, and interfaces.

Cluster management made simple: Easy-to-use cluster management services, including basic single-line commands for adding clusters, high availability of the management plane, access to consoles and logs, and secure configurations.

Secure application zones: With swift integration with corporate LDAP and Active Directory systems, a single cluster can be divided logically and physically between different teams. This seems to be the most convenient way to assign new namespaces to Kubernetes clusters.

Layer 7 routing for Swarm: The new Interlock 2.0 architecture provides new and optimized enhancements for network routing in Swarm. For more information on the Interlock architecture, refer to the official Docker blog.

Kubernetes: All the core components of the Kubernetes environment, like the APIs and CLIs, are available to users in a CNCF-conformant Kubernetes stack.

There were a few more enhancements related to the supply chain and security domains. For the complete set of improvements to Docker, check out the official Docker EE documentation.

JumpCloud Launches New Integrations with Slack, Salesforce, GitHub, Atlassian, and AWS from DevOps.com

Matthew Emerick
15 Oct 2020
1 min read
User identity lifecycle management across multiple apps from single cloud directory platform saves IT hours of onboarding / offboarding work  LOUISVILLE, CO – Oct. 15, 2020 – JumpCloud today announced new integrations that provide IT admins easier user identity lifecycle management across multiple applications from a single platform. These new integrations with Slack, Salesforce, Atlassian, GitHub, and AWS provide streamlined user management […] The post JumpCloud Launches New Integrations with Slack, Salesforce, GitHub, Atlassian, and AWS appeared first on DevOps.com.