
Tech News - Cloud Computing

175 Articles

Introducing Platform9 Managed Kubernetes Service

Amrata Joshi
04 Feb 2019
3 min read
Today, the team at Platform9, a company known for its SaaS-managed hybrid cloud, introduced a fully managed, enterprise-grade Kubernetes service that works on VMware with a full SLA guarantee. It enables enterprises to deploy and run Kubernetes easily, without management overhead or advanced Kubernetes expertise. It features enterprise-grade capabilities, including multi-cluster operations, zero-touch upgrades, high availability, monitoring, and more, which are handled automatically and backed by the SLA.

Platform9 Managed Kubernetes (PMK) is part of Platform9's hybrid cloud solution, which helps organizations centrally manage VMs, containers, and serverless functions in any environment. Enterprises can support Kubernetes at scale alongside their traditional VMs, legacy applications, and serverless functions.

Features of Platform9 Managed Kubernetes

Self-service, cloud experience
IT operations and VMware administrators can now offer developers a simple, self-service provisioning and automated management experience. It is now possible to deploy multiple Kubernetes clusters at the click of a button, operated under the strictest SLAs.

Run Kubernetes anywhere
PMK allows organizations to run Kubernetes instantly, anywhere. It also delivers centralized visibility and management across all Kubernetes environments, whether on-premises, in the public cloud, or at the edge. This helps organizations eliminate shadow IT and VM/container sprawl, ensure compliance, improve utilization, and reduce costs across all infrastructure.

Speed
PMK allows enterprises to get Kubernetes running on VMware in less than an hour, eliminating the operational complexity of Kubernetes at scale. It helps enterprises modernize their VMware environments without any hardware or configuration changes.

Open ecosystem
By delivering open source Kubernetes on VMware without code forks, PMK lets enterprises benefit from the open source community and the full range of Kubernetes-related services and applications, while ensuring portability across environments.

Sirish Raghuram, co-founder and CEO of Platform9, said, "Kubernetes is the #1 enabler for cloud-native applications and is critical to the competitive advantage for software-driven organizations today. VMware was never designed to run containerized workloads, and integrated offerings in the market today are extremely clunky, hard to implement and even harder to manage. We're proud to take the pain out of Kubernetes on VMware, delivering a pure open source-based, Kubernetes-as-a-Service solution that is fully managed, just works out of the box, and with an SLA guarantee in your own environment."

To learn more about delivering Kubernetes on VMware, check out the demo video.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure
GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more


GitHub open sources its GitHub Load Balancer (GLB) Director

Savia Lobo
10 Aug 2018
2 min read
GitHub open sourced the GitHub Load Balancer (GLB) Director on August 8, 2018. GLB Director is a Layer 4 load balancer that scales a single IP address across a large number of physical machines while minimizing connection disruption when servers change. Apart from open sourcing the GLB Director, GitHub has also shared details of the load balancer's design.

GitHub first announced GLB on September 22, 2016. GLB is GitHub's scalable load-balancing solution for bare-metal data centers. It powers the majority of GitHub's public web and Git traffic, as well as critical internal systems such as its highly available MySQL clusters.

How GitHub Load Balancer Director works

GLB Director is designed for use in data center environments where multiple servers announce the same IP address via BGP, and network routers shard traffic among those servers using ECMP routing. ECMP shards connections per-flow using consistent hashing, but the addition or removal of nodes causes some disruption to traffic because no state is stored for each flow. A split L4/L7 design is typically used so that the L4 servers can redistribute these flows back to a consistent server in a flow-aware manner. GLB Director implements the L4 (director) tier of this split L4/L7 load balancer design.

The GLB design

GLB Director does not replace services like HAProxy and NGINX; rather, it is a layer in front of these services (or any TCP service) that allows them to scale across multiple physical machines without requiring each machine to have a unique IP address.

(Image source: GitHub)

GLB Director only processes packets on ingress, encapsulating them inside an extended Generic UDP Encapsulation packet. Egress packets from the proxy-layer servers are sent directly to clients using Direct Server Return.

Read more about the GLB Director in detail on the GitHub Engineering blog post.

Microsoft's GitHub acquisition is good for the open source community
Snapchat source code leaked and posted to GitHub
Why Golang is the fastest growing language on GitHub
GitHub has added security alerts for Python
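GLB Director's actual implementation differs, but the core idea of consistent, flow-aware mapping can be illustrated with a minimal Python sketch. The snippet below uses rendezvous hashing with hypothetical server names: adding or removing a proxy only remaps the flows that belonged to it, which is the property a director tier relies on to avoid disrupting established connections.

```python
import hashlib

def _score(flow_key: str, server: str) -> int:
    # Hash the (flow, server) pair; the highest score wins (rendezvous hashing).
    return int.from_bytes(
        hashlib.sha256(f"{flow_key}|{server}".encode()).digest()[:8], "big"
    )

def pick_server(flow_key: str, servers: list[str]) -> str:
    # Deterministically map a flow (e.g. the TCP 4-tuple) to one server.
    # Removing a server only remaps the flows that scored highest on it;
    # all other flows keep their existing assignment.
    return max(servers, key=lambda s: _score(flow_key, s))

servers = ["proxy-1", "proxy-2", "proxy-3"]  # hypothetical proxy-tier hosts
flow = "203.0.113.7:51514->192.0.2.1:443"
print(pick_server(flow, servers))
```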


Microsoft finally makes Hyper-V Server 2019 available, after a delay of more than six months

Vincy Davis
18 Jun 2019
3 min read
Last week, Microsoft announced that Hyper-V Server, one of the variants in the Windows Server 2019 (October 2018/1809) release, is finally available on the Microsoft Evaluation Center. This release comes after a delay of more than six months since the re-release of Windows Server 1809/Server 2019 in early November. Microsoft also announced that Hyper-V Server 2019 will be available to Visual Studio subscription customers by June 19, 2019.

Microsoft Hyper-V Server is a free product that includes all the Hyper-V virtualization features found in the Datacenter edition. It is ideal for running Linux virtual machines or VDI VMs.

Microsoft had originally released Windows Server 2019 in October 2018. However, it had to pull both the client and server versions of 1809 down to investigate reports of users missing files after updating to the latest Windows 10 feature update. Microsoft then re-released Windows Server 1809/Server 2019 in early November 2018, but without Hyper-V Server 2019.

Read More: Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019

Early this year, Microsoft made the Windows Server 2019 evaluation media available on the Evaluation Center, but Hyper-V Server 2019 was still missing. Though Microsoft provided no official statement, it is suspected the delay may have been due to errors in the working of Remote Desktop Services (RDS). Later, in April, Microsoft officials stated that they had found some issues with the media and would release an update soon.

Now that Hyper-V Server 2019 is finally available, users of Windows Server 2019 can be at ease. Users who managed to download the original release of Hyper-V Server 2019 while it was available are advised to delete it and install the new version once it is made available on June 19, 2019.

Users are happy with this news, but are still wondering what took Microsoft so long to ship Hyper-V Server 2019.

https://twitter.com/ProvoSteven/status/1139926333839028224

People are also skeptical about the product quality. A user on Reddit states, "I'm shocked, shocked I tell you! Honestly, after nearly 9 months of MS being unable to release this, and two months after they said the only thing holding it back were 'problems with the media', I'm not sure I would trust this edition. They have yet to fully explain what it is that held it back all these months after every other Server 2019 edition was in production."

Microsoft's Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more
Microsoft quietly deleted 10 million faces from MS Celeb, the world's largest facial recognition database
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]


Oracle releases open source and commercial licenses for Java 11 and later

Savia Lobo
13 Sep 2018
3 min read
Oracle announced that it will provide JDK releases under two licenses:

Under the open source GNU General Public License v2, with the Classpath Exception (GPLv2+CPE)
Under a commercial license, for those using the Oracle JDK as part of an Oracle product or service, or who do not wish to use open source software

These combinations replace the historical Binary Code License (BCL) for Oracle Java SE technologies, which combined free and paid commercial terms. The BCL has been the primary license for Oracle Java SE technologies for well over a decade. It historically covered 'commercial features' that were not available in OpenJDK builds. Over the past year, however, Oracle has contributed those features to the OpenJDK Community, including Java Flight Recorder, Java Mission Control, Application Class-Data Sharing, and ZGC. From Java 11 onwards, therefore, Oracle JDK builds and OpenJDK builds will be essentially identical.

Minute differences between Oracle JDK 11 and OpenJDK

Oracle JDK 11 emits a warning when using the -XX:+UnlockCommercialFeatures option, whereas in OpenJDK builds this option results in an error. This difference remains in order to make it easier for users of Oracle JDK 10 and earlier releases to migrate to Oracle JDK 11 and later.

The javac --release command behaves differently for the Java 9 and Java 10 targets, because in those releases the Oracle JDK contained some additional modules that were not part of the corresponding OpenJDK releases, such as:

javafx.base
javafx.controls
javafx.fxml
javafx.graphics
javafx.media
javafx.web

This difference remains in order to provide a consistent experience for specific kinds of legacy use. These modules are either now available separately as part of OpenJFX, are now in both OpenJDK and the Oracle JDK because they were commercial features that Oracle contributed to OpenJDK (e.g., Flight Recorder), or were removed from Oracle JDK 11 (e.g., JNLP).

The Oracle JDK has always required third-party cryptographic providers to be signed by a known certificate, while the cryptography framework in OpenJDK has an open cryptographic interface, meaning it does not restrict which providers can be used. Oracle JDK 11 will continue to require a valid signature, and Oracle OpenJDK builds will continue to allow the use of either a valid signature or an unsigned third-party crypto provider.

Read more about this news in detail on the Oracle blog.

State of OpenJDK: Past, Present and Future with Oracle
Oracle announces a new pricing structure for Java
Oracle reveals issues in Object Serialization. Plans to drop it from core Java


AWS announces Open Distro for Elasticsearch licensed under Apache 2.0

Savia Lobo
12 Mar 2019
4 min read
Amazon Web Services announced a new open source distribution of Elasticsearch named Open Distro for Elasticsearch, in collaboration with Expedia Group and Netflix. Open Distro for Elasticsearch will focus on driving innovation with value-added features to ensure users have a feature-rich option that is fully open source. It gives developers the freedom to contribute open source, value-added features on top of the Apache 2.0-licensed Elasticsearch upstream project.

The need for Open Distro for Elasticsearch

Elasticsearch's Apache 2.0 license enabled it to gain adoption quickly and allowed unrestricted use of the software. However, since June 2018, the community has witnessed a significant intermixing of proprietary code into the code base. While an Apache 2.0-licensed download is still available, there is an extreme lack of clarity as to what customers who care about open source are getting and what they can depend on. "Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code. This is hard to track and govern, could lead to a breach of license, and could lead to immediate termination of rights (for both proprietary free and paid)."

Individual code commits also increasingly contain both open source and proprietary code, making it difficult for developers who want to work only on open source to contribute and participate. The innovation focus has also shifted from furthering the open source distribution to making the proprietary distribution popular, which means the majority of new Elasticsearch users are now, in fact, running proprietary software. "We have discussed our concerns with Elastic, the maintainers of Elasticsearch, including offering to dedicate significant resources to help support a community-driven, non-intermingled version of Elasticsearch. They have made it clear that they intend to continue on their current path", the AWS community states in their blog.

These changes have also created uncertainty about the longevity of the open source project, as it is becoming less innovation-focused. Customers also want the freedom to run the software anywhere and self-support at any point in time if they need to. This has led to the creation of Open Distro for Elasticsearch.

Features of Open Distro for Elasticsearch

Keeps data security in check
Open Distro for Elasticsearch protects users' clusters with advanced security features, including a number of authentication options such as Active Directory and OpenID, encryption in flight, fine-grained access control, detailed audit logging, advanced compliance features, and more.

Automatic notifications
Open Distro for Elasticsearch provides a powerful, easy-to-use event monitoring and alerting system. This enables users to monitor data and send notifications to their stakeholders automatically. It also includes an intuitive Kibana interface and a powerful API, which further ease setting up and managing alerts.

Increased SQL query interactions
It also allows users who are already comfortable with SQL to interact with their Elasticsearch cluster and integrate it with other SQL-compliant systems. SQL offers more than 40 functions, data types, and commands, including join support and direct export to CSV.

Deep diagnostic insights with Performance Analyzer
Performance Analyzer provides deep visibility into system bottlenecks by allowing users to query Elasticsearch metrics alongside detailed network, disk, and operating system stats. Performance Analyzer runs independently, without any performance impact, even when Elasticsearch is under stress.

According to the AWS Open Source Blog, "With the first release, our goal is to address many critical features missing from open source Elasticsearch, such as security, event monitoring and alerting, and SQL support."

Subbu Allamaraju, VP Cloud Architecture at Expedia Group, said, "We are excited about the Open Distro for Elasticsearch initiative, which aims to accelerate the feature set available to open source Elasticsearch users like us. This initiative also helps in reassuring our continued investment in the technology."

Christian Kaiser, VP Platform Engineering at Netflix, said, "Open Distro for Elasticsearch will allow us to freely contribute to an Elasticsearch distribution, that we can be confident will remain open source and community-driven."

To know more about Open Distro for Elasticsearch in detail, visit the AWS official blog post.

GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
How does Elasticsearch work? [Tutorial]
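As an illustration of the SQL feature described above, here is a minimal sketch that posts a query to the SQL plugin's REST endpoint (historically exposed at _opendistro/_sql). The host, credentials, and index name are placeholders for a local demo cluster, not values from the announcement.

```python
import requests

# Placeholders: a local Open Distro demo cluster with default credentials.
resp = requests.post(
    "https://localhost:9200/_opendistro/_sql",
    json={"query": "SELECT status, COUNT(*) FROM logs GROUP BY status"},
    auth=("admin", "admin"),
    verify=False,  # demo cluster with a self-signed certificate
)
print(resp.json())
```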


Google introduces Cloud HSM beta hardware security module for crypto key security

Prasad Ramesh
23 Aug 2018
2 min read
Google has rolled out a beta of its Cloud HSM, a hardware security module service aimed at hardware-backed cryptographic key security. Cloud HSM gives customers stronger security without the operational overhead. It is a cloud-hosted hardware security module that allows customers to store encryption keys.

Cloud HSM uses Federal Information Processing Standard (FIPS) 140-2 Level 3 security. FIPS is a U.S. government security standard for cryptographic modules in non-military use, certified for use in financial and healthcare institutions. An HSM is a specialized hardware component designed to encrypt small data blocks, in contrast to the larger blocks managed with the Key Management Service (KMS).

Cloud HSM is available now and is fully managed by Google, meaning all patching, scaling, cluster management, and upgrades are done automatically with no downtime. The customer has full control of the Cloud HSM service via the Cloud KMS APIs.

Il-Sung Lee, Product Manager at Google, stated: "And because the Cloud HSM service is tightly integrated with Cloud KMS, you can now protect your data in customer-managed encryption key-enabled services, such as BigQuery, Google Compute Engine, Google Cloud Storage and DataProc, with a hardware-protected key."

In addition to Cloud HSM, Google has also released betas of asymmetric key support for both Cloud KMS and Cloud HSM. Users can now create a variety of asymmetric keys for decryption or signing operations, which means they can store the keys they use for PKI or code signing in a Google Cloud managed keystore. "Specifically, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 keys will be available for signing operations, while RSA 2048, RSA 3072, and RSA 4096 keys will also have the ability to decrypt blocks of data."

For more information, visit the Google Cloud blog; for HSM pricing, visit the Cloud HSM page.

Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Machine learning APIs for Google Cloud Platform
Top 5 cloud security threats to look out for in 2018
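To give a sense of the Cloud KMS API control mentioned above, here is a sketch using the google-cloud-kms Python client (a library that postdates this announcement) to create an HSM-protected asymmetric signing key. The project, location, key ring, and key names are placeholders, not values from the article.

```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()
# Placeholder project, location, and key ring names.
parent = client.key_ring_path("my-project", "us-east1", "my-key-ring")

key = client.create_crypto_key(
    request={
        "parent": parent,
        "crypto_key_id": "hsm-signing-key",
        "crypto_key": {
            "purpose": kms.CryptoKey.CryptoKeyPurpose.ASYMMETRIC_SIGN,
            "version_template": {
                # EC P-256 signing, one of the algorithms named above.
                "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.EC_SIGN_P256_SHA256,
                # HSM protection is what distinguishes Cloud HSM keys
                # from software-protected Cloud KMS keys.
                "protection_level": kms.ProtectionLevel.HSM,
            },
        },
    }
)
print(key.name)
```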

Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP

Melisha Dsouza
13 Dec 2018
3 min read
Today, Google Cloud announced the alpha availability of Cloud TPU Pods: tightly coupled supercomputers built with hundreds of Google's custom Tensor Processing Unit (TPU) chips and dozens of host machines, linked via an ultrafast custom interconnect. Google states that these pods make it easier, faster, and more cost-effective to develop and deploy cutting-edge machine learning workloads on Google Cloud. Developers can iterate over training data in minutes and train huge production models in hours or days instead of weeks.

The Tensor Processing Unit (TPU) is an ASIC that powers several of Google's major products, including Translate, Photos, Search, Assistant, and Gmail. It provides up to 11.5 petaflops of performance in a single pod.

Features of Cloud TPU Pods

#1 Proven reference models
Customers can take advantage of Google-qualified reference models that are optimized for performance, accuracy, and quality for many real-world use cases. These include object detection, language modeling, sentiment analysis, translation, image classification, and more.

#2 Connect Cloud TPUs to custom machine types
Users can connect to Cloud TPUs from custom VM types. This helps them optimally balance processor speeds, memory, and high-performance storage resources for their individual workloads.

#3 Preemptible Cloud TPUs
Preemptible Cloud TPUs are 70% cheaper than on-demand instances. Long training runs with checkpointing, or batch prediction on large datasets, can now be done at an optimal rate using Cloud TPUs.

#4 Integrated with GCP
Cloud TPUs and Google Cloud's data and analytics services are fully integrated with other GCP offerings, providing developers unified access across the entire service line. Developers can run machine learning workloads on Cloud TPUs and benefit from Google Cloud Platform's storage, networking, and data analytics technologies.

#5 Additional features
Cloud TPUs perform very well at synchronous training. The Cloud TPU software stack transparently distributes ML models across multiple TPU devices in a Cloud TPU Pod to help customers achieve scalability. All Cloud TPUs are integrated with Google Cloud's high-speed storage systems, ensuring that data input pipelines can keep up with the TPUs. Users do not have to manage parameter servers, deal with complicated custom networking configurations, or set up exotic storage systems to achieve unparalleled training performance in the cloud.

Performance and cost benchmarking of Cloud TPU

Google compared Cloud TPU Pods and Google Cloud VMs with NVIDIA Tesla V100 GPUs attached, using one of the MLPerf models: TensorFlow 1.12 implementations of ResNet-50 v1.5 (GPU version, TPU version). They trained ResNet-50 on the ImageNet image classification dataset. The results show that Cloud TPU Pods deliver near-linear speedups for large-scale training tasks; the largest Cloud TPU Pod configuration tested (256 chips) delivers a 200X speedup over an individual V100 GPU. Check out their methodology page for further details on this test.

Training ResNet-50 on a full Cloud TPU v2 Pod costs almost 40% less than training the same model to the same accuracy on an n1-standard-64 Google Cloud VM with eight V100 GPUs attached, and the full Cloud TPU Pod completes the training task 27 times faster.

Head over to Google Cloud's official page to know more about Cloud TPU Pods. Alternatively, check out Cloud TPU's documentation for more insights.
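For a sense of what targeting a TPU looks like in code, here is a minimal sketch using TensorFlow's current distribution APIs (which postdate the TF 1.12 setup benchmarked above). The TPU name is a placeholder, and the model is a toy stand-in for a real workload.

```python
import tensorflow as tf

# "my-tpu-pod" is a placeholder for a Cloud TPU or TPU Pod slice name.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu-pod")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# A model built under the strategy scope is replicated across all TPU
# cores; Keras then runs the synchronous training loop for you.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```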
Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Oracle's Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google's big enterprise Cloud market move?


Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more

Melisha Dsouza
24 Oct 2018
3 min read
Earlier this month, the team at Google Cloud Storage announced new capabilities for improving the reliability and performance of users' data. They have now rolled out storage security updates that cater to data privacy and compliance with financial services regulations. With these new security upgrades, including the general availability of Cloud Storage Bucket Lock, UI changes for privacy management, Cloud KMS integration with Cloud Storage, and much more, users will be able to build reliable applications and ensure the safety of their data.

Storage security features on Google Cloud Storage:

#1 General availability of Cloud Storage Bucket Lock
Cloud Storage Bucket Lock is now generally available. This feature is especially useful for users that need Write Once Read Many (WORM) storage, as it prevents deletion or modification of content for a specified period of time. To help organizations meet compliance, legal, and regulatory requirements for retaining data for specific lengths of time, Bucket Lock provides retention lock capabilities as well as event holds for content. Bucket Lock works with all tiers of Cloud Storage, so both primary and archive data can use the same storage setup. Users can automatically move locked data into colder storage tiers and delete data once the retention period expires. Bucket Lock has been used in a diverse range of applications, from financial records compliance and healthcare records retention to media content archives and much more. Head over to the Bucket Lock documentation to learn more about this feature; a minimal API sketch is shown below.

#2 New UI features for secure sharing of data
The new UI features in the Cloud Storage console enable users to securely share their data and gain insight into which data, buckets, and objects are publicly visible across their Cloud Storage environment. The public sharing option in the UI has been replaced with an Identity and Access Management (IAM) panel. This mechanism prevents users from publicly sharing their objects with an accidental click, lets administrators clearly understand which content is publicly available, and shows users how their data is being shared publicly.

#3 Use Cloud KMS keys with Cloud Storage data
Cloud Key Management Service (KMS) provides users with sophisticated encryption key management capabilities. Users can manage and control encryption keys for their Cloud Storage datasets through the Cloud Storage-KMS integration. This KMS integration helps users manage active keys, authorize users or applications to use certain keys, monitor key use, and more. Cloud Storage users can also perform key rotation, revocation, and deletion. Head over to the Google Cloud Storage blog to learn more about Cloud KMS integration.

#4 Access Transparency for Cloud Storage and Persistent Disk
This new transparency mechanism shows users who, when, where, and why Google support or engineering teams have accessed their Cloud Storage and Persistent Disk environment. Users can use Stackdriver APIs to programmatically monitor logs related to Cloud Storage actions, and can archive their logs if required for future auditing. This gives complete visibility into administrative actions for monitoring and compliance purposes. You can learn more about Access Transparency (AXT) on Google's blog post.

Head over to the Google Cloud Storage blog to understand how these new upgrades add to the security and control of cloud resources.
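As a sketch of the Bucket Lock workflow described in #1, the snippet below uses the google-cloud-storage Python client to set and then lock a retention policy. The bucket name is a placeholder, and note that locking is irreversible in a real project.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-compliance-bucket")  # placeholder bucket name

# WORM semantics: retain every object for 90 days.
bucket.retention_period = 90 * 24 * 60 * 60  # seconds
bucket.patch()

# Locking the policy is permanent; afterwards the retention
# period can only be increased, never removed or shortened.
bucket.lock_retention_policy()
```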
What's new in Google Cloud Functions serverless platform
Google Cloud announces new Go 1.11 runtime for App Engine
Cloud Filestore: A new high performance storage option by Google Cloud Platform


What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018

Savia Lobo
17 May 2018
6 min read
Earlier this month, 4000+ developers attended the Cloud Native Computing Foundation's flagship event, the KubeCon + CloudNativeCon 2018 conference, held in Copenhagen from May 2 to 4. The conference featured a series of announcements on microservices, containers, and other open source tools for building applications for the web. Top vendors, including Google, RedHat, Oracle, and many more, announced a myriad of releases and improvements related to Kubernetes. Read our article on the big vendor announcements at KubeCon + CloudNativeCon Europe. Let's brush through the top seven vendors and their release highlights from this conference.

Google released Stackdriver Kubernetes Monitoring and open sourced gVisor

Released in beta, Stackdriver Kubernetes Monitoring enables both developers and operators to use Kubernetes comprehensively and also simplifies operations for them. Features of Stackdriver Kubernetes Monitoring include:

Scalable, comprehensive observability: Stackdriver Kubernetes Monitoring aggregates logs, events, and metrics from the Kubernetes environment to help you understand your application's behavior. This rich, unified set of signals helps developers build higher-quality applications faster, and helps operators speed root-cause analysis and reduce mean time to resolution (MTTR).

Seamless integration with Prometheus: Stackdriver Kubernetes Monitoring integrates seamlessly with Prometheus, a leading open source approach to Kubernetes monitoring, without any changes.

Unified view: Stackdriver Kubernetes Monitoring provides a unified view of signals from infrastructure, applications, and services across multiple Kubernetes clusters. With this, developers, operators, and security analysts can effectively manage Kubernetes workloads and easily observe system information from various sources in flexible ways, from inspecting a single container to exploring massive multi-cluster deployments.

Get started easily, on-cloud or on-premises: Stackdriver Kubernetes Monitoring is pre-integrated with Google Kubernetes Engine, so it can be used immediately with Kubernetes Engine workloads. It also integrates easily with Kubernetes deployments on other clouds or on-premises infrastructure, giving access to a unified collection of logs, events, and metrics for an application regardless of where its containers are deployed.

Google has also open sourced gVisor, a sandboxed container runtime. gVisor, which is lighter than a virtual machine, enables secure isolation for containers. It integrates with Docker and Kubernetes, making it simple to run sandboxed containers in production environments. gVisor is written in Go to avoid security pitfalls that can plague kernels.

RedHat shared an open source toolkit called Operator Framework

RedHat, in collaboration with the Kubernetes open source community, has shared the Operator Framework to make it easy to build Kubernetes applications. The Operator Framework is an open source toolkit designed to manage Kubernetes-native applications, called Operators, in an effective, automated, and scalable manner. The Operator Framework comprises:

The Operator SDK, which helps developers build Operators based on their expertise, without requiring knowledge of the complexities of the Kubernetes API.
The Operator Lifecycle Manager, which supervises the lifecycle of all Operators running across a Kubernetes cluster and keeps a check on the services associated with them.
Operator Metering, soon to be added, which allows creating usage reports for Operators providing specialized services.

Oracle added new open serverless support and key Kubernetes features to Oracle Container Engine

According to a report, security, storage, and networking are the major challenges companies face while working with containers. To address these challenges, Oracle Container Engine has added:

New governance, compliance, and auditing features, such as Identity and Access Management, role-based access control, support for the Payment Card Industry Data Security Standard, and cluster management auditing capabilities.
Scalability features: support for small and virtualized environments, predictable IOPS, and the ability to run Kubernetes on NVIDIA Tesla GPUs.
New networking features, including load balancing and virtual cloud networks.
Storage features: the OCI volume provisioner and flexvolume driver.

Additionally, Oracle Container Engine features support for Helm and Tiller, and the ability to run existing apps with Kubernetes.

Kublr announced that its version 1.9 provides easy configuration of Kubernetes clusters for enterprise users

Kublr released an advanced configuration capability in its version 1.9. This feature is designed to give customers the flexibility to tailor Kubernetes clusters to specific use cases, including:

GPU-enabled nodes for data science applications
Hybrid clusters spanning data centers and clouds
Custom Kubernetes tuning parameters
Other advanced requirements

New features in Kublr 1.9 include:

Kubernetes 1.9.6 and a new dashboard
Improved backups in AWS with full cluster restoration
Centralized monitoring, IAM, and custom cluster specification

Read more about Kublr 1.9 on the Kublr blog.

Kubernetes announced the availability of Kubeflow 0.1

The Kubernetes community brought forward a power-packed tooling package known as Kubeflow 0.1, which provides a basic set of packages for developing, training, and deploying machine learning models. This package:

Supports Argo, for managing ML workflows.
Offers JupyterHub to create interactive Jupyter notebooks for collaborative and interactive model training.
Provides a number of TensorFlow tools, including a training controller for native distributed training. The training controller can be configured for CPUs or GPUs and adjusted to fit the size of a cluster with a single click.

Additional features, such as simplified setup via a bootstrap container, improved accelerator integration, and support for more ML frameworks like Spark ML, XGBoost, and sklearn, will be released in the 0.2 version of Kubeflow.

CNCF (Cloud Native Computing Foundation) announced a new Certified Kubernetes Application Developer program

The Cloud Native Computing Foundation has launched the Certified Kubernetes Application Developer (CKAD) exam and the corresponding Kubernetes for Developers course. The CKAD exam certifies that users can design, build, configure, and expose cloud-native applications on top of Kubernetes. A Certified Kubernetes Application Developer can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes. Read more about this program on the Cloud Native Computing Foundation blog.

DigitalOcean launched a managed Kubernetes service

The DigitalOcean cloud computing platform launched DigitalOcean Kubernetes, a simple and cost-effective solution for deploying, orchestrating, and managing container workloads in the cloud. With the DigitalOcean Kubernetes service, developers can save time and deploy their container workloads without configuring everything from scratch. The company has also provided early access to this Kubernetes service. Read more on the DigitalOcean blog.

Apart from these seven vendors, many others, such as Datadog, Humio, and Weaveworks, have also announced features, frameworks, and services based on Kubernetes, serverless, and cloud computing. These are not the only announcements; see the KubeCon + CloudNativeCon 2018 website for the other announcements rolled out at this event.

Top 7 DevOps tools in 2018
Apache Spark 2.3 now has native Kubernetes support!
Polycloud: a better alternative to cloud agnosticism


Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services

Sugandha Lahoti
03 Jun 2019
4 min read
Update: This article has been updated to include Google's response to Sunday's service disruption.

Over the weekend, Google Cloud suffered a major outage, taking down a number of Google services (YouTube, G Suite, Gmail, etc.). It also affected services dependent on Google, such as Snapchat, Nest, Discord, and Shopify. The problem was first reported by East Coast users in the U.S. around 3 PM ET / 12 PM PT, and the company resolved it after more than four hours. According to Downdetector, users in the UK, France, Austria, Spain, and Brazil also reported suffering from the outage.

https://twitter.com/DrKMhana/status/1135291239388143617

In a statement posted to its Google Cloud Platform status page, the company said it was experiencing a multi-region issue with Google Compute Engine. "We are experiencing high levels of network congestion in the eastern USA, affecting multiple services in Google Cloud, GSuite, and YouTube. Users may see a slow performance or intermittent errors. We believe we have identified the root cause of the congestion and expect to return to normal service shortly," the company said.

The issue was sorted four hours after Google acknowledged the downtime. "The network congestion issue in the eastern USA, affecting Google Cloud, G Suite, and YouTube has been resolved for all affected users as of 4:00 pm US/Pacific," the company said in a statement. "We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a detailed report of this incident once we have completed our internal investigation. This detailed report will contain information regarding SLA credits."

This outage caused some real suffering. Not only did it impact some of the most-used apps around (YouTube and Snapchat), people also reported being unable to use their Nest-controlled devices, for example to turn on their AC or open their "smart" locks to let people into the house.

https://twitter.com/davidiach/status/1135302533151436800

Even Shopify experienced problems because of the Google outage, which prevented some stores (both brick-and-mortar and online) from processing credit card payments for hours.

https://twitter.com/LarryWeru/status/1135322080512270337

The dependency of the world's most popular applications on a single backend in the hands of one company is a bit startling, as is how many people rely on just one hosting service. At the very least, companies should think of setting up a contingency plan in case the services go down again.

https://twitter.com/zeynep/status/1135308911643451392
https://twitter.com/SeverinAlexB/status/1135286351962812416

Another question that popped up was whether Google Cloud randomly going down is proof that cloud-based gaming isn't ready for mass audiences yet. At this year's Game Developers Conference (GDC), Google marked its entry into the game industry with Stadia, its new cloud-based platform for streaming games. It will launch later this year in select countries, including the U.S., Canada, the U.K., and Europe.

https://twitter.com/BrokenGamezHDR/status/1135318797068488712
https://twitter.com/soul_societyy/status/1135294007515500549

On Monday, Google released an apologetic update on the outage, outlining the incident, its detection, and the response. In essence, the root cause of Sunday's disruption was a configuration change that was intended for a small number of servers in a single region. The configuration was incorrectly applied to a larger number of servers across several neighboring regions, and it caused those regions to stop using more than half of their available network capacity. The network traffic to and from those regions then tried to fit into the remaining network capacity, but it did not. The network became congested, and Google's networking systems correctly triaged the traffic overload and dropped larger, less latency-sensitive traffic in order to preserve smaller latency-sensitive traffic flows, much as urgent packages may be couriered by bicycle through even the worst traffic jam.

Next, Google's engineering teams are conducting a thorough post-mortem to understand all the factors contributing to both the network capacity loss and the slow restoration.

Facebook family of apps hits 14 hours outage, longest in its history
Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users
YouTube went down, Twitter flooded with deep questions, YouTube back and everyone is back to watching cat videos.

Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure’s Intelligent Cloud

Natasha Mathur
09 Oct 2018
2 min read
Microsoft announced yesterday that it is collaborating with Grab, the leading on-demand transportation, mobile payments, and online-to-offline services platform in Southeast Asia, as part of a strategic cloud partnership. The partnership aims to transform the delivery of digital services and mobility by using Microsoft's state-of-the-art expertise in machine learning and other artificial intelligence (AI) capabilities.

"Our partnership with Grab opens up new opportunities to innovate in both a rapidly evolving industry and growth region. We're excited to team up to transform the customer experience as well as enhance the delivery of digital services for the millions of users who rely on Grab for safe and affordable transport, food and package delivery, mobile payments, and financial services", said Peggy Johnson, executive vice president at Microsoft.

Grab is a Singapore-based technology company delivering ride-hailing, ride-sharing, and logistics services via its app in Singapore and neighboring Southeast Asian nations. It currently operates in 235 cities across eight Southeast Asian countries. Moreover, Grab's digital wallet, GrabPay, is the top player in Southeast Asia.

This partnership is expected to help both companies explore a wide range of innovative deep-technology projects, such as mobile facial recognition with built-in AI for drivers and customers, and using Microsoft Azure's fraud detection services to prevent fraudulent transactions on Grab's platform. These projects aim to transform the experience for Grab's users, driver-partners, merchants, and agents.

Grab will adopt Microsoft Azure as its preferred cloud platform, and Microsoft is set to make a strategic investment in Grab, the magnitude of which is currently undisclosed.

"As a global technology leader, Microsoft's investment into Grab highlights our position as the leading homegrown technology player in the region. We look forward to collaborating with Microsoft in the pursuit of enhancing on-demand transportation and seamless online-to-offline experiences for users", said Ming Maa, president of Grab.

There are a few other areas of collaboration between Grab and Microsoft, including Microsoft Outlook integration, Microsoft Kaizala, in-car solutions, and integration of Microsoft Rewards Gift Cards. For more information, check out the official Microsoft blog.

Microsoft open sources Infer.NET, its popular model-based machine learning framework
Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google's Stream news last week
Microsoft's new neural text-to-speech service lets machines speak like people


Debian project leader elections go without nominations. What now?

Fatema Patrawala
13 Mar 2019
5 min read
The Debian Project is an association of individuals who have made common cause to create a free operating system. One of the traditional rites of the northern hemisphere spring is the election of the Debian project leader. Over a six-week period starting in March, interested candidates put their names forward, describe their vision for the project as a whole, answer questions from Debian developers, then wait and watch while the votes come in. But what would happen if Debian were to hold an election and no candidates stepped forward? The Debian project has just found itself in that situation this year and is trying to figure out what happens next.

The Debian project scatters various types of authority widely among its members, leaving relatively little for the project leader. As long as they stay within the bounds of Debian policy, individual developers have nearly absolute control over the packages they maintain. Beyond that:

Difficult technical disagreements between developers are handled by the project's technical committee.
The release managers and FTP masters make the final decisions on what the project will actually ship (and when).
The project secretary ensures that the necessary procedures are followed.
The policy team handles much of the overall design for the distribution.

So, in a sense, there is relatively little leading left for the leader to do. The roles that do fall to the leader fit into a couple of broad areas. The first is representing the project to the rest of the world: the leader gives talks at conferences and manages the project's relationships with other groups and companies. The second role is, to a great extent, administrative: the leader manages the project's money, appoints developers to other roles within the project, and takes care of details that nobody else in the project is responsible for.

Leaders are elected to a one-year term; for the last two years, the position has been filled by Chris Lamb. The February "Bits from the DPL" by Chris gives a good overview of the tasks the leader is expected to carry out.

The Debian constitution describes the process for electing the leader. Six weeks prior to the end of the current leader's term, a call for candidates goes out. Only those recognized as Debian developers are eligible to run; they get one week to declare their intentions. There follows a three-week campaigning period, then two weeks for developers to cast their votes. This being Debian, there is always a "none of the above" option on the ballot; should this option win, the whole process restarts from the beginning.

This year, the call for nominations was duly sent out by project secretary Kurt Roeckx on March 3. But, as of March 10, no eligible candidates had put their names forward. Lamb has been conspicuous in his absence from the discussion, with the obvious implication that he does not wish to run for a third term. So, it would seem, the nomination period has come to a close and the campaigning period has begun, but there is nobody there to do any campaigning.

This being Debian, the constitution naturally describes what happens in this situation: the nomination period is extended for another week. Any Debian developers who procrastinated past the deadline now have another seven days in which to get their nominations in; the new deadline is March 17. Should this deadline also pass without candidates, it will be extended for another week; this loop will repeat indefinitely until somebody gives in and submits their name.

Meanwhile, though, there is another interesting outcome of this lack of candidacy: the election of a new leader, whenever it actually happens, will come after the end of Lamb's term. There is no provision for locking the current leader in the office and requiring them to continue carrying out its duties; when the term is done, it's done. So the project is now certain to have a period of time with no leader at all.

Some developers seem to relish this possibility; one even suggested that a machine-learning system could be placed into the role instead. But, as Joerg Jaspert pointed out: "There is a whole bunch of things going via the leader that is either hard to delegate or impossible to do so". Given enough time without a leader, various aspects of the project's operation could eventually grind to a halt.

The good news is that this possibility, too, has been foreseen in the constitution. In the absence of a project leader, the chair of the technical committee and the project secretary are empowered to make decisions, as long as they are able to agree on what those decisions should be. Since Debian developers are famously an agreeable and non-argumentative bunch, there should be no problem with that aspect of things. In other words, the project will manage to muddle along for a while without a leader, though various processes could slow down and become more awkward if the current candidate drought persists.

One might well wonder, though, why there seems to be nobody who wants to take the helm of this project for a year. Could the fact that it is an unpaid position requiring a lot of time and travel have something to do with it? If that were indeed part of the problem, Debian might eventually have to consider doing what a number of similar organizations have done and create a paid position to do this work. Such a change would not be easy to make, but if the project finds itself struggling to find a leader every year, it's a discussion that may need to happen.

Are Debian and Docker slowly losing popularity?
It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!
Debian 9.7 released with fix for RCE flaw


Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications

Amrata Joshi
12 Dec 2018
2 min read
Yesterday, GitLab and TriggerMesh introduced GitLab Serverless, which helps enterprises run serverless workloads on any cloud using Knative, Google's Kubernetes-based platform for building, deploying, and managing serverless workloads. GitLab Serverless enables businesses to deploy serverless functions and applications on any cloud or infrastructure from the GitLab UI by using Knative. GitLab Serverless is scheduled for public release on 22 December 2018 and will be available in GitLab 11.6. It involves technology developed by TriggerMesh, a multi-cloud serverless platform, that enables businesses to run serverless workloads on Kubernetes.

Sid Sijbrandij, co-founder and CEO of GitLab, said, "We're pleased to offer cloud-agnostic serverless as a built-in part of GitLab's end-to-end DevOps experience, allowing organizations to go from planning to monitoring in a single application."

Functions as a Service (FaaS)
With GitLab Serverless, users can run their own Function-as-a-Service (FaaS) on any infrastructure without worrying about vendor lock-in. FaaS allows users to write small, discrete units of code with event-based execution. When deploying the code, developers need not worry about the infrastructure it will run on. It also saves resources, since the code executes only when needed and nothing is consumed while the app is idle. (A minimal example of such a function follows below.)

Kubernetes and Knative
Flexibility and portability are achieved by running serverless workloads on Kubernetes. GitLab Serverless uses Knative to create a seamless experience across the entire DevOps lifecycle.

Deploy on any infrastructure
With GitLab Serverless, users can deploy to any cloud or on-premises infrastructure. GitLab can connect to any Kubernetes cluster, so users can choose to run their serverless workloads anywhere Kubernetes runs.

Auto-scaling with "scale to zero"
The Kubernetes cluster automatically scales up and down based on load. "Scale to zero" stops the consumption of resources when there are no requests.

To know more about this news, check out the official announcement.

Haskell is moving to GitLab due to issues with Phabricator
GitLab 11.5 released with group security and operations-focused dashboard, control access to GitLab pages
GitLab 11.4 is here with merge request reviews and many more features
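As mentioned under FaaS above, a serverless function is just a small, event-triggered unit of code. The handler below is a hypothetical minimal Python example, not GitLab's specific runtime convention; each FaaS runtime defines its own entry-point signature.

```python
import json

def handle(event: dict) -> str:
    # Runs only when an event arrives; no resources are consumed while
    # idle, which is what "scale to zero" refers to above.
    name = event.get("name", "world")
    return json.dumps({"message": f"Hello, {name}!"})

if __name__ == "__main__":
    # Local smoke test with a synthetic event.
    print(handle({"name": "GitLab"}))
```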

VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Melisha Dsouza
04 Mar 2019
3 min read
Last week, Paul Fazzone, GM Cloud Native Applications, announced the launch of VMware Essential PKS "as a modular approach to cloud-native operation". VMware Essential PKS includes upstream Kubernetes, reference architectures to inform design decisions, and expert support to guide users through upgrades and maintenance and to troubleshoot reactively when needed. Paul notes that more than 80% of containers run on virtual machines (VMs), with the percentage growing every year. This launch serves the main objective of establishing VMware as the leading enabler of Kubernetes and cloud-native operation.

Features of Essential PKS

#1 Modular approach
Customers who have specific technological requirements for networking, monitoring, storage, etc. can build a more modular architecture on upstream Kubernetes. VMware Essential PKS will help these customers access upstream Kubernetes with proactive support, on the condition that these organizations either have the in-house expertise to work with those components, intend to grow that capability, or are willing to use an expert team.

#2 Application portability
Customers will be able to use the latest version of upstream Kubernetes, ensuring that they are never locked into a vendor-specific distribution.

#3 Flexibility
This service allows customers to implement a multi-cloud strategy that lets them choose tools and clouds as they prefer, building a flexible platform on upstream Kubernetes for their workloads.

#4 Open source community support
VMware contributes to multiple SIGs and open source projects that strengthen key technologies and fill gaps in the Kubernetes ecosystem.

#5 Cloud-native ecosystem support and guidance
Customers will be able to access 24x7, SLA-driven support for Kubernetes and key open source tooling. VMware experts will partner with customers to help them with architecture design reviews and to evaluate networking, monitoring, backup, and other solutions for building a production-grade open source Kubernetes platform.

The Kubernetes community has received this news with enthusiasm.

https://twitter.com/cmcluck/status/1100506616124719104
https://twitter.com/edhoppitt/status/1100444712794615808

In November, VMware announced at VMworld that it was buying Heptio. Heptio products work with upstream Kubernetes and help enterprises realize the impact of Kubernetes on their business. According to FierceTelecom, "PKS Essentials takes the Heptio approach of building a more modular, customized architecture for deploying software containers on upstream Kubernetes but with VMware support."

Rancher Labs announces 'K3s': A lightweight distribution of Kubernetes to manage clusters in edge computing environments
CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure
Tumblr open sources its Kubernetes tools for better workflow integration


Cloud Filestore: A new high performance storage option by Google Cloud Platform

Vijin Boricha
27 Jun 2018
3 min read
Google recently came up with a new storage option for developers in its cloud. Cloud Filestore, which is in beta, will launch next month, according to the Google Cloud Platform blog. Applications that require a filesystem interface and a shared filesystem for data can leverage this file storage service. It provides a fully managed Network Attached Storage (NAS) service that integrates with Google Compute Engine and Kubernetes Engine instances. Developers can leverage Filestore for high-performance file-based workloads.

Now enterprises can easily run applications that depend on a traditional file system interface with Google Cloud Platform. Traditionally, if an application needed a standard file system, developers would have to improvise a file server with a persistent disk. Filestore does away with such methods and allows GCP developers to spin up storage as needed.

Filestore offers high throughput, low latency, and high IOPS (input/output operations per second). The service is available in two tiers: premium and standard. The premium tier costs $0.30/GB/month and promises a max throughput of 700 MB/s and 30,000 max IOPS. The standard tier costs $0.20/GB/month with 180 MB/s max throughput and 5,000 max IOPS.

(Image: A snapshot of Filestore features)

Filestore was introduced at the Los Angeles region launch and majorly focused on the entertainment and media industries, where there is a great need for shared file systems for enterprise applications. But this service is not limited to the media industry; other industries that rely on similar enterprise applications can also benefit from it.

Benefits of using Filestore

A lightning-fast experience
Filestore provides high IOPS for latency-sensitive workloads such as content management systems, databases, random I/O, or other metadata-intensive applications. This results in minimal variability in performance.

Consistent performance throughout
Cloud Filestore ensures that you pay a predictable price for predictable performance. Users can independently choose their preferred IOPS tier (standard or premium) and storage capacity. With these options, users can fine-tune their filesystem for a particular workload and will experience consistent performance for that workload over time.

Simplicity at its best
Cloud Filestore, a fully managed, NoOps service, is integrated with the rest of the Google Cloud portfolio. One can easily mount Filestore volumes on Compute Engine VMs. Filestore is tightly integrated with Google Kubernetes Engine, which allows containers to refer to the same shared data.

To know more about this exciting release, visit the Cloud Filestore official website.

Related Links
AT&T combines with Google cloud to deliver cloud networking at scale
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
GitLab is moving from Azure to Google Cloud in July