
Tech News - Cloud & Networking

376 Articles

Google is looking to acquire Looker, a data analytics startup, for $2.6 billion even as antitrust concerns arise in Washington

Sugandha Lahoti
07 Jun 2019
5 min read
Google has entered into an agreement to acquire data analytics startup Looker and plans to add it to its Google Cloud division. The acquisition will cost Google $2.6 billion in an all-cash transaction. After the acquisition, the Looker organization will report to Frank Bien, who will in turn report to Thomas Kurian, CEO of Google Cloud. Looker is Google's biggest acquisition since it bought smart home company Nest for $3.2 billion in 2014.

Looker's analytics platform combines business intelligence and data visualization tools. Founded in 2011, Looker has grown rapidly and now helps more than 1,700 companies understand and analyze their data. The company had raised more than $280 million in funding, according to Crunchbase.

Looker bridges the gap between two areas: data warehousing and business intelligence. Its platform includes a modeling layer where the user codifies the view of the data using a SQL-like proprietary modeling language (LookML), complemented by an end-user visualization tool that provides the self-service analytics portion.

Primarily, Looker will help Google Cloud become a complete analytics solution, helping customers go from ingesting data to visualizing results and integrating data and insights into their daily workflows. Looker + Google Cloud will be used for:
Connecting, analyzing and visualizing data across Google Cloud, Azure, AWS, on-premise databases or ISV SaaS applications
Operationalizing BI for everyone with powerful data modeling
Augmenting business intelligence from Looker with artificial intelligence from Google Cloud
Creating collaborative, data-driven applications for industries with interactive data visualization and machine learning

Implications of Google + Looker

Google and Looker already have a strong existing partnership and 350 common customers (such as Buzzfeed, Hearst, King, Sunrun, WPP Essence, and Yahoo!), and this acquisition will only strengthen it. "We have many common customers we've worked with. One of the great things about this acquisition is that the two companies have known each other for a long time, we share very common culture," Kurian said in a blog.

This is also a significant move by Google to gain market share from Amazon Web Services, which reported $7.7 billion in revenue for the last quarter. Google Cloud has been trailing behind Amazon and Microsoft in the cloud-computing market, and Looker's acquisition will hopefully make its service more attractive to corporations. Looker's CEO Frank Bien commented on the partnership as a chance to gain the scale of the Google Cloud platform. "What we're really leveraging here, and I think the synergy with Google Cloud, is that this data infrastructure revolution and what really emerged out of the Big Data trend was very fast, scalable — and now in the cloud — easy to deploy data infrastructure," he said.

What is intriguing is Google's timing and the all-cash payment for this buyout. The FTC, DOJ, and Congress are currently looking at bringing potential antitrust action against Google and other big tech companies. According to widespread media reports, the US Department of Justice is readying an antitrust investigation into Google. It has been reported that the probe would examine whether the tech giant broke antitrust law in the operation of its online and advertising businesses. According to Paul Gallant, a tech analyst with Cowen who focuses on regulatory issues, "A few years ago, this deal would have been waved through without much scrutiny. We're in a different world today, and there might well be some buyer's remorse from regulators on prior tech deals like this."

Public reaction to the acquisition has been mixed. While some are happy:
https://twitter.com/robgo/status/1136628768968192001
https://twitter.com/holgermu/status/1136639110892810241
Others remain dubious: "With Looker out of the way, the question turns to 'What else is on Google's cloud shopping list?'," said Aaron Kessler, a Raymond James analyst, in a report. "While the breadth of public cloud makes it hard to list specific targets, vertical specific solutions appear to be a strategic priority for Mr. Kurian."

There are also questions about whether Google will limit Looker to BigQuery, or at least give it the newest features first.
https://twitter.com/DanVesset/status/1136672725060243457

Then there is the issue of whether Google will limit which clouds Looker can run on, although the company has said it will continue to support Looker's multi-cloud strategy and will expand support for multiple analytics tools and data sources to give customers choice. Google Cloud will also continue to expand Looker's investments in product development, go-to-market, and customer success capabilities.

Google is also known for killing off its own products and undermining some of its acquisitions. With Nest, for example, Google said it would be integrated with Google Assistant; the decision was reversed only after a massive public backlash. Looker could be another such acquisition, eventually merging with Google Analytics, Google's proprietary web analytics service.

The deal is expected to close later this year, subject to regulatory approval.

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Google and Binomial come together to open-source Basis Universal Texture Format
Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language


Docker announces Docker Desktop Enterprise

Savia Lobo
05 Dec 2018
3 min read
Yesterday, at DockerCon Europe 2018, the Docker community announced Docker Desktop Enterprise, an easy, fast, and secure way to build production-ready containerized applications.

Docker Desktop Enterprise

Docker Desktop Enterprise is a new addition to Docker's desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. The Enterprise version enables developers to work with the frameworks and languages they are comfortable with. It will also assist IT teams to safely configure, deploy, and manage development environments while adhering to corporate standards and practices. The Enterprise version thus enables organizations to quickly move containerized applications from development to production and reduce their time to market.

Features of Docker Desktop Enterprise

Enterprise manageability: With Docker Desktop Enterprise, IT teams and application architects can present developers with application templates designed specifically for their team, to bootstrap and standardize the development process and provide a consistent environment all the way to production. For the IT team, Docker Desktop Enterprise is packaged as standard MSI (Windows) and PKG (Mac) distribution files. These files work with existing endpoint management tools, with lockable settings via policy files. This edition also provides developers with ready-to-code, customized, and approved application templates.

Enterprise deployment and configuration packaging: IT desktop admins can deploy and manage Docker Desktop Enterprise across distributed developer teams with their preferred endpoint management tools using standard MSI and PKG files. Desktop administrators can also enable or disable particular settings within Docker Desktop Enterprise to meet corporate standards and provide the best developer experience. Application architects provide developers with trusted, customized application templates through the Application Designer interface in Docker Desktop Enterprise, helping to improve reliability and security by ensuring developers start from approved designs.

Increased developer productivity and production-ready containerized applications: Developers can quickly use company-provided application templates that instantly replicate production-approved application configurations on the local desktop by using configurable version packs. With these version packs, developers can synchronize their desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. No Docker CLI commands are required to get started with configurable version packs. Developers can also use the Application Designer interface's template-based workflows for creating containerized applications. For those who have never launched a container before, the Application Designer interface provides the foundational container artifacts and their organization's skeleton code to help them get started with containers in minutes.

Read more about Docker Desktop Enterprise here.

Gremlin makes chaos engineering with Docker easier with new container discovery feature
Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login
Zeit releases Serverless Docker in beta


Microsoft open-sources Project Zipline, its data compression algorithm and hardware for the cloud

Natasha Mathur
15 Mar 2019
3 min read
Microsoft announced yesterday that it is open-sourcing its new cutting-edge compression technology, called Project Zipline. As part of this open-source release, the Project Zipline compression algorithms, hardware design specifications, and Verilog source code for register transfer language (RTL) have been made available.

Apart from the announcement of Project Zipline, the Open Compute Project (OCP) Global Summit 2019 also started yesterday in San Jose. At the summit, the latest innovations that can make hardware more efficient, flexible, and scalable are shared. Microsoft states that its journey with OCP began in 2014, when it joined the foundation and contributed the server and data center designs that power its global Azure cloud. Microsoft contributes innovations to OCP every year at the summit, and this year it has decided to contribute Project Zipline. "This contribution will provide collateral for integration into a variety of silicon components across the industry for this new high-performance compression standard. Contributing RTL at this level of detail as open source to OCP is industry leading", states the Microsoft team.

Project Zipline aims to optimize the hardware implementation for the different types of data found in cloud storage workloads. Microsoft has been able to achieve higher compression ratios, higher throughput, and lower latency than the other algorithms currently available. This allows for compression without compromise, as well as data processing for different industry usage models (from cloud to edge). Microsoft's Project Zipline compression algorithm produces up to 2x higher compression ratios compared to the commonly used Zlib-L4 64KB model. These enhancements, in turn, produce direct customer benefits in cost savings and allow access to petabytes or exabytes of capacity in a cost-effective way.

Project Zipline has also been optimized for a large variety of datasets, and Microsoft's release of the RTL allows hardware vendors to use a reference design that offers the highest compression, lowest cost, and lowest power. Project Zipline is available to the OCP ecosystem, so vendors can contribute further to benefit Azure and other customers. The Microsoft team states that this open-source contribution will set a "new precedent for driving frictionless collaboration in the OCP ecosystem for new technologies and opening the doors for hardware innovation at the silicon level". In the future, Microsoft expects Project Zipline compression technology to enter different market segments and usage models such as network data processing, smart SSDs, archival systems, cloud appliances, general-purpose microprocessors, IoT, and edge devices.

For more information, check out the official Microsoft announcement.

Microsoft open sources the Windows Calculator code on GitHub
Microsoft open sources 'Accessibility Insights for Web', a Chrome extension to help web developers fix their accessibility issues
Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models
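Project Zipline itself ships as RTL rather than a software library, but the Zlib-L4 64KB baseline mentioned above is easy to reason about in software. Below is a minimal sketch, using Python's built-in zlib, of how a compression ratio against such a baseline is typically measured; the sample payload is a made-up log-like string, not one of Microsoft's benchmark datasets.

```python
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    """Return original_size / compressed_size for a zlib baseline."""
    compressed = zlib.compress(data, level)
    return len(data) / len(compressed)

# Hypothetical sample payload; real cloud workloads (logs, telemetry, blobs)
# are what Zipline's up-to-2x claim is measured against.
sample = b"timestamp=2019-03-15 service=storage status=ok latency_ms=12\n" * 4096

print(f"zlib level 6 ratio: {compression_ratio(sample):.2f}x")
print(f"zlib level 9 ratio: {compression_ratio(sample, level=9):.2f}x")
```

A hardware implementation such as Zipline aims to deliver higher ratios than this software baseline while also sustaining much higher throughput at lower latency.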


What to expect from vSphere 6.7

Vijin Boricha
11 May 2018
3 min read
VMware has announced vSphere 6.7, the latest release of its industry-leading virtualization platform. With vSphere 6.7, IT organizations can address key infrastructure demands like:
Extensive growth in the quantity and diversity of applications delivered
Increased adoption of hybrid cloud environments
Global expansion of data centers
Robust infrastructure and application security

Let's take a look at some of the key capabilities of vSphere 6.7:

Effortless and efficient management: vSphere 6.7 is built on the innovations delivered by vSphere 6.5 and takes the customer experience to another level. With vSphere 6.7 you can leverage management simplicity, operational efficiency, and faster time to market, all at scale. It comes with an enhanced vCenter Server Appliance (vCSA) and new APIs that improve multi-vCenter deployments, which results in easier management of the vCenter Server Appliance as well as backup and restore. Customers can now link multiple vCenters and have seamless visibility across their environment without depending on external platform services or load balancers.

Extensive security capabilities: vSphere 6.7 enhances the security capabilities of vSphere 6.5. It adds support for Trusted Platform Module (TPM) 2.0 hardware devices and also introduces Virtual TPM 2.0, bringing significant enhancements to both hypervisor and guest operating system security. With this capability, VMs and hosts cannot be tampered with, preventing the loading of unauthorized components and enabling the desired guest operating system security features. vSphere 6.7 also further enhances VM Encryption and makes it operationally simpler to manage, enabling encrypted vMotion across different vCenter instances. In addition, vSphere 6.7 extends its security features through the collaboration between VMware and Microsoft, ensuring secured Windows VMs on vSphere.

Universal application platform: vSphere is now a universal application platform that supports existing mission-critical applications along with new workloads such as 3D graphics, big data, machine learning, cloud-native applications and more. It has also extended its support to some of the latest hardware innovations in the industry, delivering exceptional performance for a variety of workloads. Through the collaboration between VMware and Nvidia, vSphere 6.7 further extends its support for GPUs by virtualizing Nvidia GPUs for non-VDI and non-general-purpose-computing use cases such as artificial intelligence, machine learning, and big data. With these enhancements, customers can better manage the lifecycle of hosts, reducing disruption for end users. VMware plans to invest more in this area in order to bring full vSphere support to GPUs in future releases.

A more seamless hybrid cloud experience: As customers increasingly look at hybrid cloud options, vSphere 6.7 introduces vCenter Server Hybrid Linked Mode. It gives customers unified manageability and visibility across an on-premises vSphere environment running one version of vSphere and a VMware Cloud on AWS environment running a different version. To ensure a seamless hybrid cloud experience, vSphere 6.7 also delivers a new capability called Per-VM EVC, which allows for seamless migration across different CPUs.

This is only an overview of the key capabilities of vSphere 6.7. You can learn more about this release from the VMware vSphere Blog and the VMware release notes.
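The management and API improvements above are typically exercised through automation against vCenter. As an illustration only (the article does not cover SDKs), here is a minimal sketch using pyVmomi, VMware's Python SDK, to connect to a vCenter Server Appliance and list ESXi hosts; the hostname and credentials are placeholders.

```python
# Minimal sketch using pyVmomi (VMware's Python SDK, not covered in the article)
# to connect to a vCenter Server Appliance and list ESXi hosts.
# Hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcsa.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name, host.summary.runtime.connectionState)
    view.DestroyView()
finally:
    Disconnect(si)
```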
Microsoft's Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
VMware vSphere storage, datastores, snapshots
The key differences between Kubernetes and Docker Swarm


GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

Amrata Joshi
23 Sep 2019
3 min read
Yesterday, the team at GitLab released GitLab 12.3, a DevOps lifecycle tool that provides a Git repository manager. This release comes with a Web Application Firewall, Productivity Analytics, a new Environments section and much more.

What's new in GitLab 12.3?

Web Application Firewall: In GitLab 12.3, the team has shipped the first iteration of the Web Application Firewall, built into the GitLab SDLC platform. The Web Application Firewall focuses on monitoring and reporting security concerns related to Kubernetes clusters.

Productivity Analytics: From GitLab 12.3, the team has started releasing Productivity Analytics, which will help teams and their leaders discover best practices for better productivity. It helps in drilling into the data and learning insights for future improvements. The group-level analytics workspace can be used to provide insight into performance, productivity, and visibility across multiple projects.

Environments section: This release comes with an "Environments" section in the cluster page that gives an overview of all the projects that are making use of the Kubernetes cluster.

License compliance: The License Compliance feature can be used to disallow a merge when a blacklisted license is found in a merge request.

Keyboard shortcuts: This release comes with new 'n' and 'p' keyboard shortcuts that can be used to move to the next and previous unresolved discussions in merge requests.

System hooks: System hooks allow automation by triggering requests whenever a variety of events take place in GitLab, as sketched in the example further below.

Multiple IP subnets: This release introduces the ability to specify multiple IP subnets, so instead of specifying a single range, it is now possible for large organizations to restrict incoming traffic to their specific needs.

GitLab Runner 12.3: Yesterday, the team also released GitLab Runner 12.3, an open-source project that is used for running CI/CD jobs and sending the results back to GitLab.

Audit logs: In this release, audit logs for push events are disabled by default to prevent performance degradation on GitLab instances.

A few GitLab users are unhappy that some features of this release, including Productivity Analytics, are available to Premium or Ultimate users only.
https://twitter.com/gav_taylor/status/1175798696769916932

To know more about this news, check out the official page.

Other interesting news in cloud and networking
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020
Istio 1.3 releases with traffic management, improved security, and more!
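System hooks work by POSTing a JSON payload to a URL registered by an administrator. As a rough illustration (not taken from the GitLab documentation), a minimal receiver sketch in Python with Flask might look like the following; the endpoint path and logging behaviour are arbitrary choices.

```python
# Minimal sketch of a GitLab system-hook receiver using Flask.
# The /gitlab/system-hook path and the print-based handling are illustrative
# choices, not something prescribed by the GitLab 12.3 release notes.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/gitlab/system-hook", methods=["POST"])
def system_hook():
    event = request.get_json(silent=True) or {}
    # System hook payloads carry an "event_name" such as "project_create",
    # "user_add_to_team", "push", and so on.
    print("received system hook:", event.get("event_name", "<unknown>"))
    return jsonify(status="ok"), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```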


Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes

Melisha Dsouza
13 Dec 2018
2 min read
On 11th December, at the KubeCon + CloudNativeCon conference held in Seattle, Grafana Labs announced the release of 'Loki', a horizontally scalable, highly available, multi-tenant log aggregation system for cloud natives, inspired by Prometheus. Compared to other log aggregation systems, Loki does not index the contents of the logs but rather a set of labels for each log stream. Storing compressed, unstructured logs and only indexing metadata makes it cost effective as well as easy to operate. Users can seamlessly switch between metrics and logs using the same labels that they are already using with Prometheus. Loki can store Kubernetes Pod logs; metadata such as Pod labels is automatically scraped and indexed.

Features of Loki

Loki is optimized to search, visualize and explore a user's logs natively in Grafana, and is optimized for Grafana, Prometheus and Kubernetes. Grafana 6.0 provides a native Loki data source and a new Explore feature that makes logging a first-class citizen in Grafana. Users can streamline incident response and switch between metrics and logs using the same Kubernetes labels that they are already using with Prometheus.

Loki is open-source alpha software with a static binary and no dependencies. Loki can be used outside of Kubernetes, but the team says that their initial use case is "very much optimized for Kubernetes". With promtail, all Kubernetes labels for a user's logs are automatically set up the same way as in Prometheus. It is possible to manually label log streams, and the team will be exploring integrations to make Loki "play well with the wider ecosystem".

Twitter is buzzing with positive comments for Grafana. Users are pretty excited about this release, complimenting Loki's cost-effectiveness and ease of use.
https://twitter.com/pracucci/status/1072750265982509057
https://twitter.com/AnkitTimbadia/status/1072701472737902592

Head over to Grafana Labs' official blog to know more about this release. Alternatively, you can check out GitHub for a demo of three ways to try out Loki: using the free hosted Grafana demo, running it locally with Docker, or building from source.

Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
Uber open sources its large scale metrics platform, M3 for Prometheus
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
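Loki is queried over HTTP, which is what Grafana's Explore view uses under the hood. The sketch below uses Python's requests library against the query_range endpoint found in current Loki releases; note that the alpha described in this article exposed an earlier, different API, and the Loki address and label selector here are assumptions.

```python
# Sketch of querying Loki over its HTTP API with `requests`.
# The endpoint path reflects current Loki releases (/loki/api/v1/query_range);
# the alpha described in this article exposed a different, earlier API.
import time
import requests

LOKI_URL = "http://localhost:3100"      # assumption: a local Loki instance
QUERY = '{app="my-service"}'            # assumption: a label set up by promtail

now_ns = int(time.time() * 1e9)
params = {
    "query": QUERY,
    "start": now_ns - int(3600 * 1e9),  # last hour, in nanoseconds
    "end": now_ns,
    "limit": 100,
}
resp = requests.get(f"{LOKI_URL}/loki/api/v1/query_range", params=params, timeout=10)
resp.raise_for_status()
for stream in resp.json()["data"]["result"]:
    print(stream["stream"], len(stream["values"]), "lines")
```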

Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login

Melisha Dsouza
23 Aug 2018
4 min read
Five years ago, Docker was the talk of the town because it made it possible to get a number of apps running on the same old servers and it also made packaging and shipping programs easy. But the same cannot be said about Docker now, as the company faces public disapproval over its decision to allow Docker for Mac and Windows to be downloaded only if one is logged into the Docker Store. Its quest for "improving the user experience" is clearly facing major roadblocks.

Two years ago, every bug report and reasonable feature request was "hard" or "something you don't want" and would result in endless back and forth for users. On 02 June 2016, new repository keys were pushed to the Docker public repository. As a direct consequence, any run of "apt-get update" (or equivalent) on a system configured with the broken repo failed with the error "Error https://apt.dockerproject.org/ Hash Sum mismatch." The issue affected ALL systems worldwide that were configured with the Docker repository. All Debian and Ubuntu versions, independent of OS and Docker versions, faced the meltdown. It became impossible to run a system update or upgrade on an existing system. This seven-hour outage caused by Docker got little tech news coverage; all that was done was a few messages on a GitHub issue. You would have expected Docker to be a little more careful after that controversy, but lo and behold, here comes yet another badly managed change.

The current matter in question

On June 20th 2018, GitHub and Reddit were abuzz with comments from confused Docker users on how they couldn't download Docker for Mac or Windows without logging into the Docker Store. The following URLs were spotted with the problem: Install Docker for Mac and Install Docker for Windows. To this, a Docker spokesperson responded saying that the change was incorporated to improve the Docker for Mac and Windows experience for users moving forward. This led to a string of accusations from dedicated Docker users; several of their complaints were shared as screenshots on github.com. The issue is still ongoing, and with no further statements released from the Docker team, users are left in the dark.

In spite of all the hullabaloo, why choose Docker?

A report by DZone indicates that Docker adoption by companies was up 30% in the last year. Its annual revenue is expected to increase by 4x, growing from $749 million in 2016 to more than $3.4 billion by 2021, representing a compound annual growth rate (CAGR) of 35 percent. So what is this company doing differently? It's no secret that Docker containers are easy to deploy in a cloud. Docker can be incorporated into most DevOps toolchains, including Puppet, Chef, Vagrant, and Ansible, some of the major tools in configuration management. Specifically for CI/CD, Docker makes it achievable to set up local development environments that are exactly like a live server; run multiple development environments from the same host with unique software, operating systems, and configurations; test projects on new or different servers; and allow multiple users to work on the same project with the exact same settings, regardless of the local host environment. It ensures that applications running in containers are completely segregated and isolated from each other, which means you get complete control over traffic flow and management.

So, what's the verdict?

Most users accused Docker's move of being manipulative, since the company is literally asking people to log in with their information so it can target them with ad campaigns and spam emails to make money. However, there were also some in support of this move. One Reddit user said that while there is no direct solution to this issue, you can use https://github.com/moby/moby/releases as a workaround, or a proper package manager if you're on Linux. Hopefully, Docker takes this as a cue before releasing any more updates that could spark public outcry. It would be interesting to see how many companies stick around and use Docker irrespective of the rollercoaster ride that users are put through. You can find further opinions on this matter at reddit.com.

Docker isn't going anywhere
Zeit releases Serverless Docker in beta
What's new in Docker Enterprise Edition 2.0?
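The local-development points above are easy to demonstrate with the Docker SDK for Python (the `docker` package on PyPI), which talks to the same daemon as the CLI. This is a minimal sketch, assuming a local Docker daemon is running; the image and command are arbitrary examples.

```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# Assumes a local Docker daemon is available; image and command are examples.
import docker

client = docker.from_env()

# Run a throwaway container: exactly the kind of isolated, reproducible
# environment the article describes for local development and CI/CD.
output = client.containers.run(
    "python:3-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,
)
print(output.decode().strip())

# Each running container is segregated from the others and from the host.
for container in client.containers.list():
    print(container.short_id, container.image.tags, container.status)
```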


Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more

Fatema Patrawala
02 Sep 2019
5 min read
Last Friday, the Kubernetes team announced the release of etcd 3.4. etcd 3.4 focuses on stability, performance and ease of operation. It includes features like pre-vote and a non-voting member, along with improvements to the storage backend and client balancer.

Key features and improvements in etcd v3.4

Better backend storage: etcd v3.4 includes a number of performance improvements for large-scale Kubernetes workloads. In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there was no write (e.g. "read-only range request ... took too long to execute"). Previously, the storage backend commit operation on pending writes blocked incoming read transactions, even when there was no pending write. Now, the commit does not block reads, which improves long-running read transaction performance. The team has further made backend read transactions fully concurrent. Previously, ongoing long-running read transactions blocked writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. The team also ran the Kubernetes 5000-node scalability test on GCE with this change and observed similar improvements.

Improved Raft voting process: The etcd server implements the Raft consensus algorithm for data replication. Raft is a leader-based protocol: data is replicated from leader to follower, a follower forwards proposals to the leader, and the leader decides what to commit or not. The leader persists and replicates an entry once it has been agreed by the quorum of the cluster. The cluster members elect a single leader, and all other members become followers. The elected leader periodically sends heartbeats to its followers to maintain its leadership, and expects responses from each follower to keep track of its progress. In its simplest form, a Raft leader steps down to a follower when it receives a message with a higher term, without any further cluster-wide health checks. This behavior can affect the overall cluster availability. For instance, a flaky (or rejoining) member drops in and out and starts a campaign. This member ends up with higher terms, ignores all incoming messages with lower terms, and sends out messages with higher terms. When the leader receives such a message of a higher term, it reverts back to follower. This becomes more disruptive when there's a network partition: whenever the partitioned node regains its connectivity, it can possibly trigger a leader re-election. To address this issue, etcd Raft introduces a new node state, pre-candidate, with the pre-vote feature. The pre-candidate first asks other servers whether it's up-to-date enough to get votes. Only if it can get votes from the majority does it increment its term and start an election. This extra phase improves the robustness of leader election in general, and helps the leader remain stable as long as it maintains its connectivity with the quorum of its peers.

Introducing a new Raft non-voting member, "Learner": The challenge with membership reconfiguration is that it often leads to quorum size changes, which are prone to cluster unavailability. Even if it does not alter the quorum, clusters with membership changes are more likely to experience other underlying problems. In order to address these failure modes, etcd introduced a new node state, "Learner", which joins the cluster as a non-voting member until it catches up to the leader's logs. This means the learner still receives all updates from the leader, while it does not count towards the quorum, which is used by the leader to evaluate peer activeness. The learner only serves as a standby node until promoted. This relaxed quorum requirement provides better availability during membership reconfiguration and better operational safety.

Improvements to client balancer failover logic: etcd is designed to tolerate various system and network faults. By design, even if one node goes down, the cluster "appears" to be working normally, by providing one logical cluster view of multiple servers. But this does not guarantee the liveness of the client. Thus, the etcd client has implemented a different set of intricate protocols to guarantee its correctness and high availability under faulty conditions. Historically, the etcd client balancer relied heavily on the old gRPC interface: every gRPC dependency upgrade broke client behavior. A majority of development and debugging efforts were devoted to fixing those client behavior changes. As a result, its implementation became overly complicated, with bad assumptions about server connectivity. The primary goal in this release was to simplify the balancer failover logic in the etcd v3.4 client: instead of maintaining a list of unhealthy endpoints, the client simply switches to the next endpoint whenever it gets disconnected from the current one.

To know more about this release, check out the Changelog page on GitHub.

What's new in cloud and networking this week?
VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
Pivotal open sources kpack, a Kubernetes-native image build service
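None of these changes alter the client-facing API: reads and writes still go through the usual key-value calls. As an illustration (using the community python-etcd3 client, which is not part of the release itself), here is a minimal sketch of the kind of read and write traffic that the new concurrent backend handles; the endpoint and key names are illustrative.

```python
# Sketch using the community python-etcd3 client (pip install etcd3) against a
# local etcd endpoint; key names and the endpoint are illustrative.
import etcd3

client = etcd3.client(host="localhost", port=2379)

# Writes go through the leader and are replicated to followers via Raft.
client.put("/config/feature-flag", "enabled")

# Reads like this are what v3.4's fully concurrent backend read transactions
# speed up when long-running reads and writes overlap.
value, metadata = client.get("/config/feature-flag")
print(value.decode(), metadata.version)

# Range read over a key prefix.
for value, metadata in client.get_prefix("/config/"):
    print(metadata.key.decode(), "=", value.decode())
```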


Introducing kdevops, a modern DevOps framework for Linux kernel development

Fatema Patrawala
20 Aug 2019
3 min read
Last Friday, Luis Chamberlain announced the release of kdevops, a DevOps framework for Linux kernel development. Chamberlain wrote in his email, "the goal behind this project is to provide a modern devops framework for Linux kernel development. It is not a test suite, it is designed to use any test suites, and more importantly, it allows us to let us easily set up test environments in a jiffie. It supports different virtualization environments, and different cloud environments, and supports different Operating Systems."

kdevops is a sample framework which lets you easily set up a testing environment for a number of different use cases.

How does kdevops work?

kdevops relies on Vagrant, Terraform and Ansible to get you going with your virtualization/bare metal/cloud provisioning environment. It relies heavily on public Ansible Galaxy roles and Terraform modules. This lets the kdevops team share code with the community and allows them to use the project as a demo framework for these Ansible roles and Terraform modules.

There are three parts to the long-term ideals for kdevops:
Provisioning the required virtual hosts/cloud environment
Provisioning your requirements
Running whatever you want

Ansible is used to fetch all the required Ansible roles. Then Vagrant or Terraform can be used to provision hosts. Vagrant makes use of two Ansible roles to update ~/.ssh/config and to update the systems with basic development preference files, things like .gitconfig or bashrc hacks; this last part is handled by the devconfig Ansible role. Since ~/.ssh/config is updated, you can then run further Ansible roles manually when using Vagrant. If using Terraform for cloud environments, it updates ~/.ssh/config directly without Ansible; however, since access to hosts in cloud environments can vary in time, running all Ansible roles is expected to be done manually.

What you can do with kdevops
Full Vagrant provisioning, including updating your ~/.ssh/config
Terraform provisioning on different cloud providers
Running Ansible to install dependencies on Debian
Using Ansible to clone, compile and boot into any random kernel git tree with a supplied config
Updating ~/.ssh/config for Terraform, first tested with the OpenStack provider, with both generic and special minicloud support; other Terraform providers just require making use of the newly published terraform module add-host-ssh-config

On Hacker News, this release has gained positive reviews, but the only concern for users is whether it has anything to do with DevOps, as it appears to be automated test environment provisioning. One of the comments reads, "This looks cool, but I'm not sure what it has to do with devops? It just seems to be automated test environment provisioning, am I missing something?"

On Reddit as well, Linux users are happy with this setup and find it really promising. One of the comments reads, "I have so much hacky scriptwork around kvm, have always been looking for a cleaner setup; this looks super promising. thank you."

To know more about this release, check out the official announcement page as well as the GitHub page.

Why do IT teams need to transition from DevOps to DevSecOps?
Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Azure DevOps report: How a bug caused 'sqlite3 for Python' to go missing from Linux images


AWS announces more flexibility in its Certification Exams, drops its exam prerequisites

Melisha Dsouza
18 Oct 2018
2 min read
Last week (on 11th October), the AWS team announced that it is removing the exam prerequisites to give users more flexibility in the AWS Certification Program. Previously, it was a prerequisite for a customer to pass the Foundational or Associate level exam before appearing for the Professional or Specialty certification. AWS has now eliminated this prerequisite, taking into account customers' requests for flexibility. Customers are no longer required to have an Associate certification before pursuing a Professional certification, nor do they need to hold a Foundational or Associate certification before pursuing a Specialty certification.

The Professional level exams are pretty tough to pass. Until a customer has deep knowledge of the AWS platform, passing the Professional exam is difficult. If a customer skips the Foundational or Associate level exams and directly appears for the Professional level exams, they may not have the practice and knowledge necessary to fare well in them; and if they fail the exam, backing up to the Associate level can be demotivating.

AWS Certification helps individuals obtain the expertise to design, deploy, and operate highly available, cost-effective, and secure applications on AWS, and gain a proficiency with AWS that can earn them tangible benefits. The exams also help employers identify skilled professionals who can use AWS technologies to lead IT initiatives, and reduce the risks and costs of implementing their workloads and projects on the AWS platform.

AWS dominates the cloud computing market, and the AWS Certified Solutions Architect exams can help candidates secure their career in this exciting field. AWS offers digital and classroom training to build cloud skills and prepare for certification exams. To know more about this announcement, head over to the official blog.

'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
AWS machine learning: Learning AWS CLI to execute a simple Amazon ML workflow [Tutorial]

AWS will be sponsoring the Rust Project

Savia Lobo
15 Oct 2019
3 min read
Yesterday, AWS announced that it is sponsoring the popular Rust programming language. Rust has seen a lot of use at AWS, where it powers various performance-sensitive components in popular services such as Lambda, EC2, and S3. Tech giants such as Google, Microsoft, and Mozilla also use Rust for writing and maintaining fast, reliable, and efficient code.

Alex Crichton, Rust Core Team member, says, "We're thrilled that AWS, which the Rust project has used for years, is helping to sponsor Rust's infrastructure. This sponsorship enables Rust to sustainably host infrastructure on AWS to ship compiler artifacts, provide crates.io crate downloads, and house automation required to glue all our processes together. These services span a myriad of AWS offerings from CloudFront to EC2 to S3. Diversifying the sponsorship of the Rust project is also critical to its long-term success, and we're excited that AWS is directly aiding this goal."

Why AWS chose Rust

Rust project maintainers say AWS chose Rust for its blazingly fast and memory-efficient performance; its rich type system and ownership model, which guarantee memory safety and thread safety; its great documentation, friendly compiler with useful error messages, and top-notch tooling; and many other amazing features. Rust has been voted the "Most Loved Language" in Stack Overflow's survey for the past four years. Rust also has an inclusive community along with top-notch libraries such as:
Serde, for serializing and deserializing data.
Rayon, for writing parallel and data-race-free code.
Tokio/async-std, for writing non-blocking, low-latency network services.
tracing, for instrumenting Rust programs to collect structured, event-based diagnostic information.

Rust, too, uses AWS services

The Rust project uses AWS services to:
Store release artifacts such as compilers, libraries, tools, and source code on S3.
Run ecosystem-wide regression tests with Crater on EC2.
Operate docs.rs, a website that hosts documentation for all packages published to the central crates.io package registry.

The AWS community is excited to welcome Rust, and it will be interesting to see what the combination of Rust and AWS produces in future implementations. A few users are confused about the nature of this sponsorship. A Redditor commented, "It's not clear exactly what this means - is it about how AWS provides S3/EC2 services for free to the Rust project already (which IIRC has been ongoing for some time), or is it an announcement of something new ($$$ or developer time being contributed?)?"

To know about this announcement in detail, read AWS' official blog post.

Amazon announces improved VPC networking for AWS Lambda functions
Reddit experienced an outage due to an issue in its hosting provider, AWS
Rust 1.38 releases with pipelined compilation for better parallelism while building a multi-crate project
Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol


Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements

Vincy Davis
19 Sep 2019
4 min read
Yesterday, the Kubernetes team announced the availability of Kubernetes 1.16, which consists of 31 enhancements: 8 moving to stable, 8 in beta, and 15 in alpha. This release contains a new feature called Endpoint Slices, in alpha, to be used as a scalable alternative to Endpoints resources. Kubernetes 1.16 also contains major enhancements like the general availability of custom resources, overhauled metrics, and volume extensions. The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs are deprecated in this version.

This is Kubernetes' third release this year. The previous version, Kubernetes 1.15, released three months ago, brought features like extensibility around core Kubernetes APIs and cluster lifecycle stability and usability improvements.

Introducing Endpoint Slices in Kubernetes 1.16

The main goal of Endpoint Slices is to increase the scalability of Kubernetes Services. With the existing Endpoints, a single resource had to include all the network endpoints of a Service, making the corresponding Endpoints resources large and costly. Also, when an Endpoints resource was updated, every piece of code watching the Endpoints required a full copy of the resource. This became a tedious process when dealing with a big cluster. With Endpoint Slices, the network endpoints for a Service are split into multiple resources, decreasing the amount of data required for updates. Endpoint Slices are restricted to 100 endpoints each by default.

The other goal of Endpoint Slices is to provide extensible and useful resources for a variety of implementations. Endpoint Slices will also provide flexibility for address types. The blog post states, "An initial use case for multiple addresses would be to support dual stack endpoints with both IPv4 and IPv6 addresses." As the feature is only in alpha, it is not enabled by default in Kubernetes 1.16.

Major enhancements in Kubernetes 1.16

General availability of custom resources: With Kubernetes 1.16, CustomResourceDefinitions (CRDs) are generally available, as apiextensions.k8s.io/v1, bringing the integration of API evolution into Kubernetes. CRDs were previously available in beta and are widely used as a Kubernetes extensibility mechanism (a short client example follows below). In CRD.v1, API evolution has 'defaulting' support by default, and when defaulting is combined with the CRD conversion mechanism, it becomes possible to build stable APIs over time. The blog post adds, "Updates to the CRD API won't end here. We have ideas for features like arbitrary subresources, API group migration, and maybe a more efficient serialization protocol, but the changes from here are expected to be optional and complementary in nature to what's already here in the GA API."

Overhauled metrics: In earlier versions, Kubernetes made extensive use of a global metrics registry to register exposed metrics. In this latest version, a dedicated metrics registry has been implemented, making Kubernetes metrics more stable and transparent.

Volume extension: This release contains many enhancements to volumes and volume modifications. Volume resizing support in the Container Storage Interface (CSI) specs has moved to beta, allowing CSI spec volume plugins to be resizable.

Additional Windows enhancements in Kubernetes 1.16
The workload identity option for Windows containers has moved to beta; containers can now gain exclusive access to external resources.
New alpha support has been added for kubeadm, which can be used to prepare and add a Windows node to a cluster.
New plugin support has been introduced for CSI in alpha.

Interested users can download Kubernetes 1.16 from GitHub. Check out the Kubernetes blog page for more information.

Other interesting news in Kubernetes
The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes
Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
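To make the custom resources point concrete, here is a minimal sketch using the official Kubernetes Python client to list objects of a CRD now served through apiextensions.k8s.io/v1. The group, version, and plural below belong to a hypothetical CRD, not to anything shipped with Kubernetes 1.16.

```python
# Sketch using the official Kubernetes Python client (pip install kubernetes)
# to read objects of a custom resource. The group/version/plural below belong
# to a hypothetical CRD and must match whatever your cluster actually defines.
from kubernetes import client, config

config.load_kube_config()              # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

backups = api.list_cluster_custom_object(
    group="example.com",               # assumption: your CRD's API group
    version="v1",
    plural="backups",                  # assumption: your CRD's plural name
)
for item in backups.get("items", []):
    meta = item["metadata"]
    print(meta.get("namespace", "<cluster-scoped>"), meta["name"])
```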


DevOps platform for coding, GitLab, reaches a valuation of $2.75 billion, more than double its last funding round, well ahead of its 2020 IPO

Fatema Patrawala
19 Sep 2019
4 min read
Yesterday, GitLab, a San Francisco-based startup, raised $268 million in a Series E funding round valuing the company at $2.75 billion, more than double its last valuation. In its Series D round of $100 million, the company was valued at $1.1 billion; with today's announcement, the valuation has more than doubled in less than a year.

GitLab provides a DevOps platform for developing and collaborating on code, offering a single application for companies to draft, develop and release code. The product is used by companies like Delta Air Lines Inc., Ticketmaster Entertainment Inc. and Goldman Sachs Group Inc. The Series E funding round was led by investors including Adage Capital Management, Alkeon Capital, Altimeter Capital, Capital Group, Coatue Management, D1 Capital Partners, Franklin Templeton, Light Street Capital, Tiger Management Corp. and Two Sigma Investments.

GitLab plans to go public in November 2020

According to Forbes, GitLab has already set November 18, 2020 as the date for going public, and the company seems primed and ready for the eventual IPO. As for the $268 million, it gives the company considerable runway ahead of the planned event and also the flexibility to choose how to take the company public. "One other consideration is that there are two options to go public. You can do an IPO or direct listing. We wanted to preserve the optionality of doing a direct listing next year. So if we do a direct listing, we're not going to raise any additional money, and we wanted to make sure that this is enough in that case," Sid Sijbrandij, GitLab co-founder and CEO, explained in an interview with TechCrunch. He further adds that the new funds will be used to add monitoring and security to GitLab's offering, and to increase the company's staff to more than 1,000 employees this year, up from the current 400. GitLab is able to add workers at a rapid rate since it has an all-remote workforce.

GitLab wants to stay independent and chooses transparency for its community

Sijbrandij says that the company made a deliberate decision to be transparent early on. For a company based on an open-source project, it is sometimes tricky to make the transition to a commercial business, and sometimes that has a negative impact on the community and the number of contributions. Transparency was a way to combat that, and it seems to be working. He reports that the community contributes 200 improvements to the GitLab open-source products every month, double the amount of just a year ago, so the community is still highly active.

He did not ignore the fact that Microsoft acquired GitHub last year for $7.5 billion, and that GitLab is a similar kind of company helping developers manage and distribute code in a DevOps environment. He claims that, in spite of that eye-popping number, his goal is to remain an independent company and take GitLab through to the next phase. "Our ambition is to stay an independent company. And that's why we put out the ambition early to become a listed company. That's not totally in our control as the majority of the company is owned by investors, but as long as we're more positive about the future than the people around us, I think we can we have a shot at not getting acquired," he said.

The community is happy with GitLab's products and services

Overall, the community is happy with this news and with GitLab's products and services. One of the comments on Hacker News reads, "Congrats, GitLab team. Way to build an impressive business. When anybody tells you there are rules to venture capital — like it's impossible to take on massive incumbents that have network effects — ignore them. The GitLab team is doing something phenomenal here. Enjoy your success! You've earned it."

Another user comments, "We've been using Gitlab for 4 years now. What got us initially was the free private repos before github had that. We are now a paying customer. Their integrated CICD is amazing. It works perfectly for all our needs and integrates really easily with AWS and GCP. Also their customer service is really damn good. If I ever have an issue, it's dealt with so fast and with so much detail. Honestly one of the best customer service I've experienced. Their product is feature rich, priced right and is easy. I'm amazed at how the operate. Kudos to the team"

Other interesting news in programming
Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio
Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements
NVIM v0.4.0 releases with new API functions, Lua library, UI events and more!

AWS Greengrass brings machine learning to the edge

Richard Gall
09 Apr 2018
3 min read
AWS already has solutions for machine learning, edge computing, and IoT. But a recent update to AWS Greengrass has combined all of these facets so you can deploy machine learning models to the edge of networks. That's an important step forward in the IoT space for AWS. With Microsoft also recently announcing a $5 billion investment in IoT projects over the next 4 years, by extending the capability of AWS Greengrass the AWS team is making sure it sets the pace in the industry.

Jeff Barr, AWS evangelist, explained the idea in a post on the AWS blog: "...You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields..."

Industrial applications of machine learning inference

Machine learning inference brings many advantages to industry and agriculture. In farming, edge-enabled machine learning systems will be able to monitor crops using image recognition; this in turn will enable corrective action to be taken, allowing farmers to optimize yields. In manufacturing, machine learning inference at the edge should improve operational efficiency by making it easier to spot faults before they occur. For example, by monitoring vibrations or noise levels, Barr explains, you'll be able to identify faulty or failing machines before they actually break. Running this on AWS Greengrass offers a number of advantages over running machine learning models and processing all the data locally: it means you can run complex models without draining your computing resources. Read more in the AWS Greengrass Developer Guide.

AWS Greengrass should simplify machine learning inference

One of the fundamental benefits of using AWS Greengrass should be that it simplifies machine learning inference at every stage of the typical machine learning workflow. From building and deploying machine learning models, to developing inference applications that can be launched locally within an IoT network, it should, in theory, make the advantages of machine learning inference accessible to more people. It will be interesting to see how this new feature is applied by IoT engineers over the next year or so. But it will also be interesting to see whether it has any impact on the wider battle for the future of industrial IoT.

Further reading:
What is edge computing?
AWS IoT Analytics: The easiest way to run analytics on IoT data, Amazon says
What you need to know about IoT product development
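To make the vibration-monitoring example concrete, here is a toy sketch of the pattern: a simple model trained in the cloud is shipped to the edge device, which scores sensor readings locally and only reports anomalies upstream. This is plain illustrative Python, not the AWS Greengrass SDK, and the model parameters and readings are invented.

```python
# Toy sketch of edge inference for vibration monitoring. Parameters that would
# be produced by cloud-side training are deployed alongside the edge function,
# which scores readings locally. Illustrative only; not Greengrass SDK code.
import json

MODEL = {"mean": 0.42, "std": 0.05, "z_threshold": 3.0}  # hypothetical trained values

def is_anomalous(vibration_mm_s: float, model: dict) -> bool:
    """Flag a reading whose z-score exceeds the trained threshold."""
    z = abs(vibration_mm_s - model["mean"]) / model["std"]
    return z > model["z_threshold"]

readings = [0.41, 0.43, 0.40, 0.44, 0.71]   # last value simulates a failing bearing
alerts = [r for r in readings if is_anomalous(r, MODEL)]

# Only the anomalies would be sent upstream, keeping bandwidth and compute low.
print(json.dumps({"readings": len(readings), "alerts": alerts}))
```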


Google's kaniko - An open-source build tool for Docker images in Kubernetes, without root access

Savia Lobo
27 Apr 2018
2 min read
Google recently introduced kaniko, an open-source tool for building container images from a Dockerfile, even without privileged root access. Prior to kaniko, building images from a standard Dockerfile typically depended on interactive access to a Docker daemon, which requires root access on the machine to run. This makes it difficult to build container images in environments that can't easily or securely expose their Docker daemons, such as Kubernetes clusters. kaniko was created to combat these challenges. With kaniko, one can build an image from a Dockerfile and push it to a registry. Since it doesn't require any special privileges or permissions, kaniko can even run in a standard Kubernetes cluster, Google Kubernetes Engine, or any environment that can't grant privileged access or expose a Docker daemon.

How does the kaniko build tool work?

kaniko runs as a container image that takes in three arguments: a Dockerfile, a build context, and the name of the registry to which it should push the final image. The image is built from scratch and contains only a static Go binary plus the configuration files needed for pushing and pulling images.

The kaniko executor takes care of extracting the base image's file system into the root. It executes each command in order and takes a snapshot of the file system after each command. The snapshot is created in user space, where the file system is running, and compared to the previous state held in memory. All changes to the file system are appended to the base image, with the relevant changes made to the image metadata. After successful execution of each command in the Dockerfile, the executor pushes the newly built image to the desired registry. In short, kaniko unpacks the filesystem, executes commands and takes snapshots of the filesystem completely in user space within the executor image; this is how it avoids requiring privileged access on your machine, and neither the Docker daemon nor the CLI is involved.

To know more about how to run kaniko in a Kubernetes cluster and in the Google Cloud Container Builder, read the documentation in the GitHub repo.

The key differences between Kubernetes and Docker Swarm
Building Docker images using Dockerfiles
What's new in Docker Enterprise Edition 2.0?
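Because the executor is just a container image, a common way to run it is as a Kubernetes Pod that passes the three arguments described above. The sketch below uses the official Kubernetes Python client; the Git context URL, destination image, and the omission of registry-credential mounting are simplifying assumptions for illustration.

```python
# Sketch: launching the kaniko executor as a Kubernetes Pod with the official
# Python client (pip install kubernetes). The Git context URL and destination
# image are placeholders; a real build would also mount registry credentials
# (e.g. a docker-registry Secret) so the final push can authenticate.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="kaniko-build"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="kaniko",
                image="gcr.io/kaniko-project/executor:latest",
                args=[
                    "--dockerfile=Dockerfile",
                    "--context=git://github.com/example/app.git",   # assumption
                    "--destination=gcr.io/example-project/app:v1",  # assumption
                ],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```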