
Tech News - Cloud & Networking

376 Articles
Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues

Bhagyashree R
31 Oct 2018
3 min read
Yesterday, Facebook open sourced a suite of Linux kernel components and tools. The suite includes products for resource control and utilization, workload isolation, load balancing, measuring, monitoring, and more. Facebook already uses these products at massive scale throughout its infrastructure, and other organizations are adopting them as well. The following are some of the products that have been open sourced:

Berkeley Packet Filter (BPF)

BPF is a highly flexible Linux kernel code execution engine. It enables safe and easy modification of kernel behavior with custom code by allowing bytecode to run at various hook points. It is currently widely used for networking, tracing, and security in a number of Linux kernel subsystems.

What can you do with it?
- Extend Linux kernel behavior for a variety of purposes such as load balancing, container networking, kernel tracing, monitoring, and security.
- Solve production issues where user-space solutions alone aren’t enough by executing user-space code in the kernel.

Btrfs

Btrfs is a copy-on-write (CoW) filesystem: instead of overwriting data in place, all updates to metadata or file data are written to a new location on disk. Btrfs focuses on fault tolerance, repair, and easy administration. It supports features such as snapshots, online defragmentation, pooling, and integrated multiple-device support, and it is the only filesystem implementation that works with resource isolation.

What can you do with it?
- Address and manage large storage subsystems by leveraging features like snapshots, online defragmentation, pooling, and integrated multiple-device support.
- Manage, detect, and repair errors with data and metadata checksums, mirroring, and file self-healing.

Netconsd (Netconsole daemon)

Netconsd is a UDP-based daemon that provides lightweight transport for Linux netconsole messages. It receives and processes log data from the Linux kernel and serves it up as structured data. Simply put, the netconsole kernel module sends kernel log messages over the network to another machine without involving user space, and netconsd receives and structures them.

What can you do with it?
- Detect, reorder, or request retransmission of missing messages using the provided metadata.
- Extract meaningful signal from the data logged by netconsd to rapidly identify and diagnose misbehaving services.

Cgroup2

Cgroup2 is a Linux kernel feature that lets you group and structure workloads and control the amount of system resources assigned to each group. It consists of controllers for memory, I/O, CPU, and more. Using cgroup2, you can isolate workloads, prioritize them, and configure the distribution of resources.

What can you do with it?
- Create isolated groups of processes, then control and measure the distribution of memory, I/O, CPU, and other resources for each group.
- Detect resource shortages using PSI pressure metrics for memory, I/O, and CPU.
- Deal with increasing resource pressure more proactively and prevent conflicts between workloads.

Along with these products, Facebook has open sourced Pressure Stall Information (PSI), oomd, and many others. You can find the complete list on the Facebook Open Source website, and also check out the official announcement.

Read next:
- Facebook open sources QNNPACK, a library for optimized mobile deep learning
- Facebook introduces two new AI-powered video calling devices “built with Privacy + Security in mind”
- Facebook’s Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others
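The PSI pressure metrics mentioned above are exposed as small text files (for example /proc/pressure/cpu, or cpu.pressure inside a cgroup2 directory). As a rough illustration of the documented `some avg10=… avg60=… avg300=… total=…` line format, here is a minimal Python sketch for parsing one line — an assumption-laden illustration, not Facebook's own tooling:

```python
def parse_psi_line(line):
    """Parse one line of PSI output, e.g. from /proc/pressure/cpu:
    'some avg10=0.22 avg60=0.17 avg300=0.11 total=517783'
    Returns the kind ('some' or 'full') and a dict of metrics."""
    kind, *fields = line.split()
    metrics = {}
    for field in fields:
        key, _, value = field.partition("=")
        # 'total' is cumulative stall time in microseconds; the avg* fields are percentages
        metrics[key] = int(value) if key == "total" else float(value)
    return kind, metrics

kind, metrics = parse_psi_line("some avg10=0.22 avg60=0.17 avg300=0.11 total=517783")
```

A monitoring agent could poll such a file and raise an alert when `avg10` crosses a threshold, which is essentially what tools like oomd build on.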

What to expect from upcoming Ubuntu 18.04 release

Gebin George
20 Apr 2018
2 min read
Ubuntu 18.04’s official release is scheduled for April 26th, 2018. Ubuntu 17.10 was released in October 2017, so within a span of six months Ubuntu is shipping its next big update. Ubuntu version numbers have an interesting trait: 18.04 will be released in the 4th month of 2018, just as 17.10 was released in the 10th month of 2017. Ubuntu 18.04 comes with some exciting new features:

Color emoji support

All previous versions of Ubuntu shipped monochrome black-and-white emojis, which lacked aesthetic appeal. This update may not top anyone’s wishlist, but emojis are an integral part of modern communication, and other distros such as Fedora gained color emoji support long ago. With the 18.04 release, you can add and view color emojis anytime, anywhere. The release uses the Noto Color Emoji font, which can be downloaded from its GitHub page.

Shipping with Linux kernel 4.15

Ubuntu 18.04 now ships with Linux kernel 4.15, which brings the much-needed Spectre and Meltdown patches to Ubuntu 18.04. It also adds native support for the Raspberry Pi touchscreen and a significant performance boost for AMD GPUs.

GNOME 3.28

Unity is no longer the default desktop environment, since the switch to a customized GNOME in the Ubuntu 17.10 release. Ubuntu plans to continue with GNOME, shipping its latest version (3.28) along with 18.04.

Xorg display server

Wayland was introduced as the default display server with the 17.10 release, but it turned out to be a problem, as a decent number of applications were not supported on Wayland. Hence, in the new release Ubuntu is switching back to the Xorg display server as the default, with Wayland provided as an option.

Increase in boot speed

Canonical, the company behind Ubuntu, claims that Ubuntu 18.04 will boot faster, as systemd’s features help identify bottlenecks and resolve them as quickly as possible.

New installer for the server edition

Ubuntu had been using the Debian text-based installer for its server edition, but with the 18.04 release the server edition switches to the all-new Subiquity installer. Check out the GitHub page for more about Subiquity.

For minor bug fixes, features, and enhancements, refer to the FOSSBYTES blog.
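The version-numbering trait described above (year.month of release) fits in one line of code. This is purely an illustrative sketch of the YY.MM convention, not anything Canonical ships:

```python
from datetime import date

def ubuntu_version(release_date):
    # Ubuntu releases are numbered YY.MM after their release date:
    # April 2018 -> 18.04, October 2017 -> 17.10
    return f"{release_date.year % 100}.{release_date.month:02d}"

print(ubuntu_version(date(2018, 4, 26)))   # 18.04
print(ubuntu_version(date(2017, 10, 19)))  # 17.10
```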

Red Hat infrastructure migration solution for proprietary and siloed infrastructure

Savia Lobo
24 Aug 2018
3 min read
Red Hat recently introduced its infrastructure migration solution to help provide an open pathway to digital transformation. The Red Hat infrastructure migration solution provides an enterprise-ready pathway to cloud-native application development via Linux containers, Kubernetes, automation, and other open source technologies. It helps organizations accelerate transformation by more safely migrating and managing workloads to an open source infrastructure platform, reducing cost and speeding innovation.

Joe Fernandes, Vice President, Cloud Platforms Products at Red Hat, said: “Legacy virtualization infrastructure can serve as a stumbling block too, rather than a catalyst, for IT innovation. From licensing costs to closed vendor ecosystems, these silos can hold organizations back from evolving their operations to better meet customer demand. We’re providing a way for enterprises to leapfrog these legacy deployments and move to an open, flexible, enterprise platform, one that is designed for digital transformation and primed for the ecosystem of cloud-native development, Kubernetes, and automation.”

The Red Hat program consists of three phases:

- Discovery Session: Red Hat Consulting engages with an organization in a complimentary Discovery Session to better understand the scope of the migration and document it effectively.
- Pilot Migrations: An open source platform is deployed and operationalized using Red Hat’s hybrid cloud infrastructure and management tooling. Pilot migrations demonstrate typical approaches, establish initial migration capability, and define the requirements for a larger-scale migration.
- Migration at scale: IT teams migrate workloads at scale. Red Hat Consulting also helps streamline operations across the virtualization pool and navigate complex migration cases.

After the Discovery Session, recommendations are provided for a more flexible open source virtualization platform based on Red Hat technologies. These include:

- Red Hat Virtualization, which offers an open software-defined infrastructure and centralized management platform for virtualized Linux and Windows workloads. It is designed to give customers greater efficiency for traditional workloads while creating a launchpad for cloud-native and container-based application innovation.
- Red Hat OpenStack Platform, built on the enterprise-grade backbone of Red Hat Enterprise Linux. It helps users build an on-premise cloud architecture that provides resource elasticity, scalability, and increased efficiency.
- Red Hat Hyperconverged Infrastructure, a portfolio of solutions that includes Red Hat Hyperconverged Infrastructure for both Virtualization and Cloud. Customers can use it to integrate compute, network, and storage in a form factor designed for greater operational and cost efficiency.

Using the new migration capabilities based on Red Hat’s management technologies, including Red Hat Ansible Automation, new workloads can be delivered in an automated fashion with self-service. These capabilities also enable IT to more quickly re-create workloads across hybrid and multi-cloud environments.

Read more about the Red Hat infrastructure migration solution on Red Hat’s official blog.

Read next:
- Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0
- Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
- Installing Red Hat CloudForms on Red Hat OpenStack

OpenSUSE may go independent from SUSE, reports LWN.net

Vincy Davis
03 Jun 2019
3 min read
Lately, the relationship between SUSE and the openSUSE community has been under discussion. Different options are being considered, among which the possibility of setting openSUSE up as an entirely independent foundation is gaining momentum. This would give openSUSE greater autonomy and control over its own future and operations, though openSUSE board chair Richard Brown and SUSE leadership have publicly reiterated that SUSE remains committed to openSUSE.

There has been a lot of concern over openSUSE’s ability to operate in a sustainable way without being entirely beholden to SUSE, and the idea of an independent openSUSE foundation has popped up many times in the past. Former openSUSE board member Peter Linnell says, “Every time, SUSE has changed ownership, this kind of discussion pops up with some mild paranoia IMO, about SUSE dropping or weakening support for openSUSE”. He adds, “Moreover, I know SUSE's leadership cares a lot about having a healthy independent openSUSE community. They see it as important strategically and the benefits go both ways.”

On the contrary, openSUSE board member Simon Lees says, “it is almost certain that at some point in the future SUSE will be sold again or publicly listed, and given the current good working relationship between SUSE and openSUSE it is likely easier to have such discussions now vs in the future should someone buy SUSE and install new management that doesn't value openSUSE in the same way the current management does.”

In an interview with LWN, Brown described the conversation between SUSE and the broader community about the possibility of an independent foundation as frank, ongoing, and healthy. He mentioned that everything from a fully independent openSUSE foundation to a tweaking of the current relationship that provides more legal autonomy for openSUSE can be considered. There is also the possibility of some form of organization run under the auspices of the Linux Foundation.

Issues faced by openSUSE

Brown has said, “openSUSE has multiple stakeholders, but it currently doesn't have a separate legal entity of its own, which makes some of the practicalities of having multiple sponsors rather complicated”. Under the current arrangement, it is difficult for openSUSE to directly handle financial contributions, and sponsorship and the ability to raise funding have become a prerequisite for openSUSE’s survival. Brown comments, “openSUSE is in continual need of investment in terms of both hardware and manpower to 'keep the lights on' with its current infrastructure”.

Another concern has been the tricky collaboration between the community and the company across all SUSE products; in particular, Brown has cited issues with openSUSE Kubic and the SUSE Container-as-a-Service Platform. With a more distinctly separate openSUSE, the hope is that, as LWN also notes, openSUSE projects would gain increased autonomy over their governance and interaction with the wider community.

Though different models for openSUSE’s governance are under consideration, Brown has said, “The current relationship between SUSE and openSUSE is unique and special, and I see these discussions as enhancing that, and not necessarily following anyone else's direction”. No hard deadline has been declared.

For more details, head over to the LWN article.

Read next:
- SUSE is now an independent company after being acquired by EQT for $2.5 billion
- 389 Directory Server set to replace OpenLDAP as Red Hat and SUSE withdraw support for OpenLDAP in their Enterprise Linux offerings
- Salesforce open sources ‘Lightning Web Components framework’

Google announces the Beta version of Cloud Source Repositories

Melisha Dsouza
21 Sep 2018
3 min read
Yesterday, Google launched the beta version of its Cloud Source Repositories. Claiming to provide a better search experience, Google Cloud Source Repositories is a Git-based source code repository built on Google Cloud. Cloud Source Repositories introduces a powerful code search feature that uses document indexing and retrieval methods similar to Google Search. Cloud Source Repositories could mark a comeback for Google after Google Code began shutting down in 2015, and it could be a strategic move, as many coders have been looking for an alternative to GitHub since its acquisition by Microsoft.

How does Google code search work?

Code search in Cloud Source Repositories optimizes the indexing, algorithms, and result types for searching code. On submitting a query, the query is sent to a root machine and sharded across hundreds of secondary machines. The machines look for matches by file names, classes, functions, and other symbols, and match the context and namespace of the symbols. A single query can search across thousands of different repositories. Cloud Source Repositories also has a semantic understanding of the code: for Java, JavaScript, Go, C++, Python, TypeScript, and Proto files, the tool also returns information on whether a match is a class, method, enum, or field.

Solutions to common code search challenges

#1 Executing searches across all the code at one’s company. If a company has repositories storing different versions of the code, executing searches across all the code is exhausting and time-consuming. With Cloud Source Repositories, the default branches of all repositories are always indexed and up to date, so searching across all the code is faster.

#2 Searching for code that performs a common operation. Cloud Source Repositories enables users to perform quick searches. Users can save time by discovering and reusing existing solutions while avoiding bugs in their code.

#3 A developer cannot remember the right way to use a common code component. Developers can enter a query and search across all of their company’s code for examples of how the common piece of code has been used successfully by other developers.

#4 Issues with a production application. If a developer encounters a specific error message in the server logs that reads ‘User ID 123 not found in PaymentDatabase’, they can perform a regular expression search for ‘User ID .* not found in PaymentDatabase’ and instantly find the location in the code where this error is triggered.

All repositories that are either mirrored or added to Cloud Source Repositories can be searched in a single query. Cloud Source Repositories has a limited free tier that supports projects up to 50GB with a maximum of 5 users. You can read more about Cloud Source Repositories in the official documentation.

Read next:
- Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!
- Google to allegedly launch a new Smart home device
- Google’s prototype Chinese search engine ‘Dragonfly’ reportedly links searches to phone numbers
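The regular-expression search in challenge #4 behaves like ordinary regex matching over log or source lines. A quick Python sketch — the log lines here are invented for illustration; only the error-message format comes from the article:

```python
import re

# Hypothetical server log lines; only the "User ID ... not found" format is from the article.
logs = [
    "User ID 123 not found in PaymentDatabase",
    "User ID 9f2c not found in PaymentDatabase",
    "Payment accepted for User ID 123",
]

pattern = re.compile(r"User ID .* not found in PaymentDatabase")
matches = [line for line in logs if pattern.search(line)]
print(matches)  # the first two lines match
```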

AWS SAM (AWS Serverless Application Model) is now open source!

Savia Lobo
24 Apr 2018
2 min read
AWS recently announced that SAM (Serverless Application Model) is now open source. With AWS SAM, one can define serverless applications in a simple and clean syntax. The AWS Serverless Application Model extends AWS CloudFormation and provides a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.

AWS SAM comprises:
- the SAM specification
- code translating SAM templates into AWS CloudFormation stacks
- general information about the model
- examples of common applications

The SAM specification and implementation are open sourced under the Apache 2.0 license for AWS partners and customers to adopt and extend within their own toolsets. The current version of the SAM specification is available at AWS SAM 2016-10-31.

Basic steps to create a serverless application with AWS SAM

Step 1: Create a SAM template, a JSON or YAML configuration file that describes the Lambda functions, API endpoints, and other resources in your application.

Step 2: Test, upload, and deploy the application using the SAM Local CLI. During deployment, SAM automatically translates the application’s specification into CloudFormation syntax, filling in default values for any unspecified properties and determining the appropriate mappings and invocation permissions to set up for any Lambda functions.

To learn more about how to define and deploy serverless applications, read the How-To Guide and see the examples. You can build serverless applications faster and further simplify development by defining new event sources, new resource types, and new parameters within SAM. You can also modify SAM to integrate it with other frameworks and deployment providers from the community. For more in-depth knowledge, read the AWS SAM development guide on GitHub.
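A template of the kind described in Step 1 might look like the following minimal YAML sketch. The `Transform` line and the `AWS::Serverless::Function` resource type come from the SAM 2016-10-31 specification; the resource names, paths, and runtime are illustrative assumptions:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:                      # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10             # era-appropriate runtime, for illustration
      CodeUri: ./src
      Events:
        HelloApi:
          Type: Api                   # SAM expands this into an API Gateway endpoint
          Properties:
            Path: /hello
            Method: get
```

During deployment, SAM expands this short description into the corresponding CloudFormation resources (the function, the API, and the invocation permissions).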
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS

Melisha Dsouza
15 Nov 2018
3 min read
Yesterday, at Devoxx Belgium, Amazon announced the preview of Amazon Corretto, a free distribution of OpenJDK with long-term support. With Corretto, users can develop and run Java applications on popular operating systems. The team further mentioned that Corretto is multiplatform and production-ready, with long-term support that will include performance enhancements and security fixes. Amazon also plans to make Corretto the default OpenJDK on Amazon Linux 2 in 2019. The preview currently supports Amazon Linux, Windows, macOS, and Docker, with additional support planned for general availability. Corretto is run internally by Amazon on thousands of production services and is certified as compatible with the Java SE standard.

Features and benefits of Corretto

1. Amazon Corretto lets users run the same environment in the cloud, on premises, and on their local machine. During the preview, Corretto allows users to develop and run Java applications on popular operating systems like Amazon Linux 2, Windows, and macOS.
2. Users can upgrade versions only when they feel the need to do so.
3. Since it is certified to meet the Java SE standard, Corretto can be used as a drop-in replacement for many Java SE distributions.
4. Corretto is available free of cost, with no additional paid features or restrictions.
5. Corretto is backed by Amazon, and the patches and improvements in Corretto enable Amazon to address high-scale, real-world service concerns. Corretto can meet heavy performance and scalability demands.
6. Customers get long-term support, with quarterly updates including bug fixes and security patches. AWS will provide urgent fixes to customers outside of the quarterly schedule.

On Hacker News, users are discussing how the product’s documentation could be better formulated. Some users feel that “Amazon's JVM is quite complex”, and others note that Oracle offers the same service at a price. One user has pointed out the differences between Oracle’s service and Amazon’s service. The most notable feature of this release appears to be the long-term support offered by Amazon.

Head over to Amazon’s blog to read more about this release. You can also find the source code for Corretto on GitHub.

Read next:
- Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
- Amazon addresses employee dissent regarding the company’s law enforcement policies at an all-staff meeting, in a first
- Amazon splits HQ2 between New York and Washington, D.C. after making 200+ states compete over a year; public sentiment largely negative

Confluent, an Apache Kafka service provider, adopts a new license to fight against cloud service providers

Natasha Mathur
26 Dec 2018
4 min read
A common trend these days is software firms limiting their software licenses to prevent cloud service providers from exploiting their open source code. One such firm is Confluent, an Apache Kafka service provider, which announced its new Confluent Community License two weeks ago. The new license allows users to download, modify, and redistribute the code, but does not let them provide the software as software-as-a-service (SaaS).

“What this means is that, for example, you can use KSQL however you see fit as an ingredient in your own products or services, whether those products are delivered as software or as SaaS, but you cannot create a KSQL-as-a-service offering. We’ll still be doing all development out in the open and accepting pull requests and feature suggestions”, says Jay Kreps, CEO, Confluent.

The new license, however, has no effect on Apache Kafka, which remains under the Apache 2.0 license, and Confluent will continue to contribute to it.

Kreps pointed out that the leading cloud providers, such as Amazon, Microsoft, Alibaba, and Google, all differ in how they approach open source. Some of these major cloud providers partner with open source companies to offer hosted versions of their software. Others take the open source code, implement it in their cloud offering, and then push all of their investment into differentiated proprietary offerings. For instance, Michael Howard, CEO, MariaDB Corp., called Amazon’s tactics “the worst behavior” he has seen in the software industry, enabled by a loophole in its licensing. Howard also said that the cloud giant is “strip mining by exploiting the work of a community of developers who work for free”, as first reported by SiliconANGLE.

One option, Kreps notes, would be for open source software firms to focus on building more proprietary software and “pull back” from their open source investments. “But we think the right way to build fundamental infrastructure layers is with open code. As workloads move to the cloud we need a mechanism for preserving that freedom while also enabling a cycle of investment, and this is our motivation for the licensing change”, mentions Kreps.

Confluent’s license change follows MongoDB, which switched to the Server Side Public License (SSPL) this October to prevent the major cloud providers from misusing its open source code. MongoDB’s decision was sparked by the fact that cloud vendors who are not responsible for the development of a piece of software can “capture all the value” of that software without contributing much back to the community. Another reason was that many cloud providers had started taking MongoDB’s open source code to offer hosted commercial versions of its database without following the open source rules. The license change helps create “an incredible opportunity to foster a new wave of great open source server-side software”, said Eliot Horowitz, CTO and co-founder of MongoDB, adding that he hopes the change will “protect open source innovation”.

MongoDB in turn followed the path of the “Commons Clause” license first adopted by Redis Labs. The Commons Clause started as an initiative by a group of top software firms to protect their rights; it is added to an existing open source license to produce a new, combined license that limits the commercial sale of the software.

All of these efforts are aimed at making sure that open source communities are not taken advantage of by the leading cloud providers. As Kreps points out, “We think this is a positive change and one that can help ensure small open source communities aren’t acting as free and unsustainable R&D for tech giants that put sustaining resources only into their own differentiated proprietary offerings”.

Read next:
- Neo4j Enterprise Edition is now available under a commercial license
- GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
- Free Software Foundation updates their licensing materials, adds Commons Clause and Fraunhofer FDK AAC license

CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox

Sugandha Lahoti
04 Oct 2018
2 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) accepted Cloud Native Buildpacks (CNB) into the CNCF Sandbox. With this collaboration, Buildpacks will be able to leverage the vendor neutrality of the CNCF. The Cloud Native Buildpacks project was initiated by Pivotal and Heroku in January 2018. The project aims to unify the buildpack ecosystems with a platform-to-buildpack contract, incorporating lessons from maintaining production-grade buildpacks at both Pivotal and Heroku.

What are Cloud Native Buildpacks?

At a high level, Cloud Native Buildpacks turn source code into production-ready, OCI-compatible container images. This gives users more options to customize the runtime while keeping their apps portable. Buildpacks minimize initial time to production, reducing the operational burden on developers, and support enterprise operators who manage apps at scale.

Buildpacks were first created by Heroku in 2011. Since then, they have been adopted by Cloud Foundry as well as Gitlab, Knative, Microsoft, Dokku, and Drie. The Buildpack API was open sourced in 2012 with Heroku-specific elements removed, but each vendor that adopted buildpacks evolved the API independently, which led to isolated ecosystems. As part of the Cloud Native Sandbox project, the Buildpack API is being standardized for all platforms. The maintainers are also opening up their tooling and will run buildpacks under the Buildpacks GitHub organization.

“Anyone can create a buildpack for any Linux-based technology and share it with the world. Buildpacks’ ease of use and flexibility are why millions of developers rely on them for their mission critical apps,” said Joe Kutner, architect at Heroku. “Cloud Native Buildpacks will bring these attributes inline with modern container standards, allowing developers to focus on their apps instead of their infrastructure.”

Developers can start using Cloud Native Buildpacks by forking one of the buildpack samples. You can also read up on the implementation specifics laid out in the Buildpack API documentation.

Read next:
- CNCF Sandbox, the home for evolving cloud native projects, accepts Google’s OpenMetrics Project
- Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
- Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project

RedHat’s OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested ‘Operators’ for applications

Melisha Dsouza
01 Mar 2019
2 min read
Last week, Red Hat launched OperatorHub.io in collaboration with Microsoft, Google Cloud, and Amazon Web Services, as a “public registry” for finding services backed by the Kubernetes Operator. According to the RedHat blog, the Operator pattern automates infrastructure and application management tasks using Kubernetes as the automation engine. Developers have shown a growing interest in Operators owing to features like accessing automation advantages of public cloud, enable the portability of the services across Kubernetes environments, and much more. RedHat also comments that the number of Operators available has increased but it is challenging for developers and Kubernetes administrators to find available Operators that meet their quality standards. To solve this challenge, they have come up with OperatorHub.io. Features of OperatorHub.io OperatorHub.io is a common registry to “publish and find available Operators”. This is a curation of Operator-backed services for a base level of documentation. It also includes active communities or vendor-backing to show maintenance commitments, basic testing, and packaging for optimized life-cycle management on Kubernetes. The platform will enable the creation of more Operators as well as an improvement to existing Operators. This is a centralized repository that helps users and the community to organize around Operators. Operators can be successfully listed on OperatorHub.io only when then show cluster lifecycle features and packaging that can be maintained through the Operator Framework’s Operator Lifecycle Management, along with acceptable documentation for intended users. Operators that are currently listed in OperatorHub.io include Amazon Web Services Operator, Couchbase Autonomous Operator, CrunchyData’s PostgreSQL, MongoDB Enterprise Operator and many more. This news has been accepted by the Kubernetes community with much enthusiasm. 
https://twitter.com/mariusbogoevici/status/1101185896777281536
https://twitter.com/christopherhein/status/1101184265943834624

This is not the first time that Red Hat has tried to build on the momentum around Kubernetes Operators. According to TheNewStack, the company acquired CoreOS last year and went on to release the Operator Framework, an open source toolkit that “provides an SDK, lifecycle management, metering, and monitoring capabilities to support Operators”.

Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers
RedHat shares what to expect from next week’s first-ever DNSSEC root key rollover
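The Operator pattern described above comes down to a reconcile loop: an Operator continuously compares the state declared in a custom resource with what it observes in the cluster and acts to converge them. Here is a minimal sketch of that idea; all function and field names are illustrative, not part of any real Operator SDK.

```python
# Toy reconcile loop in the style of the Operator pattern: compare desired
# state (from a custom resource) with observed state and emit corrective
# actions. Names and the action format here are purely illustrative.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    # Scale up or down to match the declared replica count.
    if observed.get("replicas", 0) != desired["replicas"]:
        actions.append(("scale", desired["replicas"]))
    # Roll out a new version if the declared image changed.
    if observed.get("image") != desired["image"]:
        actions.append(("update_image", desired["image"]))
    return actions

# One iteration: a database Operator notices a missing replica.
desired = {"replicas": 3, "image": "postgres:11"}
observed = {"replicas": 2, "image": "postgres:11"}
print(reconcile(desired, observed))  # [('scale', 3)]
```

A real Operator runs this loop against the Kubernetes API server and encodes application-specific operational knowledge (backups, upgrades, failover) in the corrective actions.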

Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices

Savia Lobo
01 Aug 2018
3 min read
Istio, an open-source platform that connects, manages, and secures microservices, has announced version 1.0. Istio provides a service mesh for microservices from Google, IBM, Lyft, Red Hat, and other collaborators from the open-source community.

What’s Istio?

Popularly known as a service mesh, Istio collects logs, traces, and telemetry, and adds security and policy without embedding client libraries. Istio also acts as a platform, providing APIs for integration with logging, telemetry, and policy systems. It helps measure the actual traffic between services, including requests per second, error rates, and latency, and generates a dependency graph showing how services affect one another. Istio offers a helping hand to your DevOps team by providing tools to run distributed apps smoothly. Here’s a list of what Istio does for your team:

- Performs canary rollouts, allowing the DevOps team to smoke-test any new build and ensure good build performance.
- Offers fault injection, retry logic, and circuit breaking so that DevOps teams can do more testing and change network behavior at runtime to keep applications up and running.
- Adds security: Istio can layer mutual TLS (mTLS) on every call, adding encryption in flight with the ability to authorize every single call in your cluster and mesh.

What’s new in Istio 1.0?

Multi-cluster support for Kubernetes
Multiple Kubernetes clusters can now be added to a single mesh, enabling cross-cluster communication and consistent policy enforcement. Multi-cluster support is now in beta.

Networking APIs now in beta
Networking APIs that enable fine-grained control over the flow of traffic through a mesh are now in beta. Explicitly modeling ingress and egress concerns using Gateways allows operators to control the network topology and meet access security requirements at the edge.

Mutual TLS can be rolled out incrementally without updating all clients
Mutual TLS can now be rolled out incrementally without requiring all clients of a service to be updated. This is a critical feature that unblocks in-place adoption by existing production deployments.

Mixer support for out-of-process adapters
Mixer now has support for developing out-of-process adapters. This will become the default way to extend Mixer over the coming releases and makes building adapters much simpler.

Updated authorization policies
Authorization policies, which control access to services, are now evaluated entirely locally in Envoy, increasing their performance and reliability.

Recommended install method
Helm chart installation is now the recommended install method, offering rich customization options to adopt Istio on your terms.

Istio 1.0 also includes performance improvements backed by continuous regression testing, large-scale environment simulation, and targeted fixes. Read more about Istio 1.0 in its official release notes.

6 Ways to blow up your Microservices!
How to build Dockers with microservices
How to build and deploy Microservices using Payara Micro
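The canary rollouts mentioned above rest on percentage-based traffic splitting: a small, configurable share of requests is steered to the new build. Istio implements this with route weights in Envoy; the following toy sketch (all names illustrative) just shows the per-request weighted choice.

```python
import random

# Toy percentage-based traffic splitter: pick a backend version per request
# according to configured weights, the mechanism behind canary rollouts.
def pick_backend(weights: dict, rng=random.random):
    """weights maps version name -> percentage; percentages sum to 100."""
    point = rng() * 100
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return version
    return version  # guard against floating-point edge cases

# Send 5% of traffic to the canary build, 95% to the stable build.
weights = {"v1-stable": 95, "v2-canary": 5}
counts = {"v1-stable": 0, "v2-canary": 0}
for _ in range(10_000):
    counts[pick_backend(weights)] += 1
print(counts)  # roughly 9500 / 500
```

If the canary's error rate or latency degrades, the weight is dialed back to 0 without redeploying anything; if it holds up, the weight is ratcheted toward 100.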


Google’s second innings in China: Exploring cloud partnerships with Tencent and others

Bhagyashree R
07 Aug 2018
3 min read
Google, with the aim of re-entering the Chinese market, is in talks with top companies in China like Tencent Holdings Ltd. (the company that owns the popular social media app WeChat) and Inspur Group. Its goal is to expand its cloud services in the world's second-largest economy. According to people familiar with the ongoing discussion, the talks began in early 2018 and Google narrowed the field to three firms in late March. But because of the US-China trade war, it is uncertain whether this will materialize.

Why is Google interested in cloud partnerships with Chinese tech giants?

In many countries, Google rents computing power and storage over the internet and sells G Suite, which includes Gmail, Docs, Drive, Calendar, and more tools for business, all running on its data centers. Because China requires digital information to be stored in the country, Google wants to collaborate with domestic data center and server providers to run its internet-based services; this is why it needs to partner with local players. A tie-up with large Chinese tech firms like Tencent and Inspur would also give Google powerful allies as it attempts a second innings in China after its exit from the country in 2010. A cloud partnership in China would help Google compete with rivals like Amazon and Microsoft, and with Tencent by its side, it could go up against local competitors, including Alibaba Group Holding Ltd.

How Google has been making inroads into China in the recent past

- In December, Google launched its AI China Center, the first such center in Asia, at the Google Developer Days event in Shanghai.
- In January, Google agreed to a patent licensing deal with Tencent Holdings Ltd. The agreement came with an understanding that the two companies would team up on developing future technologies. Google could host services on Tencent's data centers, and Tencent could promote Google's services to its customers.
- Reportedly, Google has agreed to launch a search engine that will comply with Chinese cybersecurity regulations. The project, code-named Dragonfly, has been underway since spring 2017 and accelerated after a December 2017 meeting between CEO Sundar Pichai and a top Chinese government official.
- It has launched a WeChat mini program and is reportedly developing a news app for China.
- It is building a cloud data center region in Hong Kong this year. Joining the recently launched Mumbai, Sydney, and Singapore regions, as well as Taiwan and Tokyo, this will be the sixth GCP region in Asia Pacific.

With no official announcements, we can only wait and see what happens. But from the above examples, we can conclude that Google is trying to expand into China, and at full speed. To know more about Google's partnership talks in China, refer to the full coverage in Bloomberg's report.

Google to launch a censored search engine in China, codenamed Dragonfly
Google Cloud Launches Blockchain Toolkit to help developers build apps easily


Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh

Amrata Joshi
12 Apr 2019
2 min read
This week, the team at Google Cloud announced the beta version of Traffic Director, a networking management tool for service meshes, at Google Cloud Next. Traffic Director Beta will help network managers understand what's happening in their service mesh. A service mesh is a network of microservices that make up an application, together with the interactions between them.

Features of Traffic Director Beta

Fully managed with an SLA
Traffic Director's production-grade features come with a 99.99% SLA. Users don't have to worry about deploying and managing the control plane.

Traffic management
With the help of Traffic Director, users can easily deploy everything from simple load balancing to advanced features like request routing and percentage-based traffic splitting.

Build resilient services
Users can keep their service up and running by deploying it across multiple regions as VMs or containers. Traffic Director can be used to deliver global load balancing with automatic cross-region overflow and failover. With Traffic Director, users can deploy service instances in multiple regions while requiring only a single service IP.

Scaling
Traffic Director handles growth in deployments and scales for larger services and installations.

Traffic management for open service proxies
The tool provides a GCP (Google Cloud Platform)-managed traffic management control plane for xDSv2-compliant open service proxies like Envoy.

Compatible with VMs and containers
Users can deploy Traffic Director-managed VM and container service instances with the help of managed instance groups and network endpoint groups.

Support for request routing policies
The tool supports routing features like traffic splitting and enables use cases like canarying, URL rewrites/redirects, fault injection, traffic mirroring, and advanced routing capabilities based on header values such as cookies.

To know more about this news, check out Google Cloud's official page.
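The header-based routing mentioned above can be pictured as an ordered list of match rules that the proxy checks per request, with the first match deciding the backend. A minimal sketch follows; the rule format and all backend names are illustrative, not Traffic Director's actual configuration schema.

```python
# Toy header-based request router of the kind a managed control plane
# programs into Envoy proxies: rules are checked in order, first match wins,
# and unmatched requests fall through to a default backend.
def route(headers: dict, rules: list, default: str) -> str:
    """rules is a list of (header_name, expected_value, backend) tuples."""
    for header, expected_value, backend in rules:
        if headers.get(header) == expected_value:
            return backend
    return default

rules = [
    # Cookie-based canary: opted-in users get the new checkout service.
    ("cookie", "experiment=new-checkout", "checkout-v2"),
    ("user-agent", "mobile", "mobile-frontend"),
]
print(route({"cookie": "experiment=new-checkout"}, rules, "checkout-v1"))
# -> checkout-v2
print(route({}, rules, "checkout-v1"))  # -> checkout-v1
```

Because the match rules live in the control plane rather than in application code, operators can add or reorder them without redeploying any service.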
Google’s Cloud Healthcare API is now available in beta
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google Cloud Next’19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more

Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use

Savia Lobo
04 Jun 2018
3 min read
Google recently announced that Google Kubernetes Engine 1.10 is generally available and ready for enterprise use. Enterprises have long faced challenges around security, networking, logging, and monitoring. With Kubernetes Engine 1.10, Google has introduced new features with robust built-in security for enterprise use:

- Shared Virtual Private Cloud (VPC): enables better control of network resources
- Regional Persistent Disks and Regional Clusters: ensure higher availability and stronger SLAs
- Node Auto-Repair GA and Custom Horizontal Pod Autoscaler: can be used for greater automation

New features in Google Kubernetes Engine 1.10

Networking
You can deploy workloads in Google's global Virtual Private Cloud (VPC) in a Shared VPC model. This gives you the flexibility to manage access to shared network resources using IAM permissions while still isolating departments. Shared VPC lets organization administrators delegate administrative responsibilities, such as creating and managing instances and clusters, to service project admins while maintaining centralized control over network resources like subnets, routers, and firewalls.

Shared VPC network in Kubernetes Engine 1.10

Storage
Kubernetes Engine now supports the new Regional Persistent Disk (Regional PD), which makes it easy to build highly available solutions. Regional PD provides persistent network-attached block storage with synchronous replication of data between two zones within a region. With Regional PDs, you don't have to worry about application-level replication and can instead take advantage of replication at the storage layer. This kind of replication offers a convenient building block for implementing highly available solutions on Kubernetes Engine.

Reliability
Regional clusters, to be made available soon, let you create a Kubernetes Engine cluster with a multi-master, highly available control plane that spreads the masters across three zones in a region, an important feature for clusters with higher uptime requirements. Regional clusters also offer a zero-downtime upgrade experience when upgrading Kubernetes Engine masters. The node auto-repair feature is now generally available; it monitors the health of the nodes in your cluster and repairs nodes that are unhealthy.

Auto-scaling
In Kubernetes Engine 1.10, the Horizontal Pod Autoscaler supports three different custom metric types in beta:

- External: for scaling based on Cloud Pub/Sub queue length
- Pods: for scaling based on the average number of open connections per pod
- Object: for scaling based on Kafka running in the cluster

To know more about these features in detail, visit the Google Blog.

Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!
Kubernetes Containerd 1.1 Integration is now generally available
Rackspace now supports Kubernetes-as-a-Service
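Whichever metric type drives it, the Horizontal Pod Autoscaler's core scaling rule (per the Kubernetes documentation) is the same: the desired replica count is the current count scaled by the ratio of the observed metric to its target, rounded up. The metric values in this sketch are illustrative.

```python
from math import ceil

# Core Horizontal Pod Autoscaler formula from the Kubernetes docs:
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    return ceil(current_replicas * current_metric / target_metric)

# Example: 4 pods averaging 150 open connections each, against a target of
# 100 connections per pod (the "Pods" custom metric type above).
print(desired_replicas(4, 150, 100))  # -> 6

# Example: a Cloud Pub/Sub backlog (the "External" metric type) has drained,
# so the deployment scales back down.
print(desired_replicas(6, 50, 100))  # -> 3
```

When the observed metric equals the target, the ratio is 1 and the replica count is left unchanged, which is what keeps the autoscaler stable at steady state.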


Epicor partners with Microsoft Azure to adopt Cloud ERP

Savia Lobo
29 May 2018
2 min read
Epicor Software Corporation recently announced a partnership with Microsoft Azure to accelerate its cloud ERP adoption. The partnership aims to deliver Epicor's enterprise solutions on the Microsoft Azure platform. The company plans to deploy its Epicor Prophet 21 enterprise resource planning (ERP) suite on Microsoft Azure, enabling customers to grow and innovate faster as they digitally transform their businesses with the reliable, secure, and scalable features of Microsoft Azure. With the Epicor and Microsoft collaboration, customers can now access the power of Epicor ERP and Prophet 21 running on Microsoft Azure. With Microsoft as a partner, Epicor:

- Leverages a range of technologies such as the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML) to deliver ready-to-use, accurate solutions for mid-market manufacturers and distributors.
- Plans to explore Microsoft technologies for advanced search, speech-to-text, and other use cases to deliver modern human/machine interfaces that improve productivity for customers.

Steve Murphy, CEO of Epicor, said, "Microsoft's focus on the 'Intelligent Cloud' and 'Intelligent Edge' complements our customer-centric focus." He further stated that after looking at several cloud options, Epicor felt Microsoft Azure offers the best foundation for building and deploying enterprise business applications that enable customers' businesses to adapt and grow. As most prospects these days ask about cloud ERP, Epicor says that by deploying this model it is ready to offer customers the ability to move to the cloud with the confidence that Microsoft Azure provides. Read more about this in detail on Epicor's official blog.

Rackspace now supports Kubernetes-as-a-Service
How to secure an Azure Virtual Network
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018