
Tech News - Networking

54 Articles

Google suffers another outage as Google Cloud servers in the us-east1 region are cut off

Amrata Joshi
03 Jul 2019
3 min read
Yesterday, Google Cloud servers in the us-east1 region were cut off from the rest of the world after an issue was reported with Cloud Networking and Load Balancing within us-east1. The disruption was traced to physical damage to multiple concurrent fiber bundles that serve network paths in us-east1.

At 10:25 am PT yesterday, the status dashboard noted that "Customers may still observe traffic through Global Load-balancers being directed away from back-ends in us-east1 at this time." A later update said that mitigation work was underway to address the issue with Google Cloud Networking and Load Balancing in us-east1. The error rate was decreasing by then, but some users still faced elevated latency.

Around 4:05 pm PT, the status was updated again: "The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours. In the meantime, we are electively rerouting traffic to ensure that customers' services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period. We will provide another status update either as the situation warrants or by Wednesday, 2019-07-03 12:00 US/Pacific tomorrow."

This appears to be the second major outage to hit Google's services in recent times. Last month, Google Calendar was down for nearly three hours around the world, and Google Cloud suffered a major outage that took down a number of Google services including YouTube, G Suite, and Gmail.

According to a Google Cloud employee commenting on the Hacker News thread, the team is experiencing an issue with a subset of the fiber paths that supply the region and has moved almost all Google.com traffic out of the region to prioritize GCP customers: "I work on Google Cloud (but I'm not in SRE, oncall, etc.). As the updates to [1] say, we're working to resolve a networking issue. The Region isn't (and wasn't) "down", but obviously network latency spiking up for external connectivity is bad. We are currently experiencing an issue with a subset of the fiber paths that supply the region. We're working on getting that restored. In the meantime, we've removed almost all Google.com traffic out of the Region to prefer GCP customers. That's why the latency increase is subsiding, as we're freeing up the fiber paths by shedding our traffic."

Google Cloud users are anxious about the outage and are waiting for services to be restored to normal.

https://twitter.com/IanFortier/status/1146079092229529600
https://twitter.com/beckynagel/status/1146133614100221952
https://twitter.com/SeaWolff/status/1146116320926359552

Ritiko, a cloud-based EHR company, is also experiencing issues because of the Google Cloud outage, as it hosts its services there.

https://twitter.com/ritikoL/status/1146121314387857408

As of now there is no further update from Google on whether the outage is resolved, but the company expects a full resolution within the next 24 hours. Check this space for updates.

Google Calendar was down for nearly three hours after a major outage
Do Google Ads secretly track Stack Overflow users?
Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard


An update on Bcachefs, the "next generation Linux filesystem"

Melisha Dsouza
03 Dec 2018
3 min read
Kent Overstreet announced Bcachefs as "the COW filesystem for Linux that won't eat your data" in 2015. Since then the filesystem has undergone numerous updates and patches to get to where it is today. On the 1st of December, Overstreet published an update on the problems and improvements currently being worked on in Bcachefs.

Status update on Bcachefs

Since the last update, Overstreet has focused on two major areas of improvement: atomicity of filesystem operations and persistence of allocation information (per-bucket sector counts). Filesystem operations that touched i_nlink were not atomic, so on startup the system would have to scan and recalculate i_nlink and delete no-longer-referenced inodes. Also, because allocation information was not persisted, on startup the system would have to recalculate all of the disk space accounting.

The team has now been able to make everything fully atomic except for fallocate/fcollapse/etc. After an unclean shutdown, the only thing to be done is to scan the inodes btree for inodes that have been deleted.

Erasure coding is about 80% done in Bcachefs. Overstreet is now focused on persistent allocation information, which will then allow him to work on reflink, a feature that will be useful to the company funding bcachefs development. The reflinked extent refcounts will be much too big to keep in memory, so they will have to be kept in a btree and updated whenever doing extent updates; the infrastructure needed to make that happen also depends on making disk space accounting persistent.

After all of these updates, he claims bcachefs will have fast mounts (including after unclean shutdowns). He is also working on improvements to disk space accounting for multi-device filesystems, which will lead up to fast mounts after clean shutdowns. To know whether a user can safely mount in degraded mode, the filesystem will have to store a list of all the combinations of disks that have data replicated across them (or are in an erasure-coded stripe), without any kind of fixed layout like regular RAID uses.

Why should you choose Bcachefs?

Overstreet describes Bcachefs as stable, fast, and possessing a small and clean code-base, along with the features necessary for a modern Linux filesystem. It has a long list of features, completed or in progress:

Copy on write (COW), like zfs or btrfs
Full data and metadata checksumming
Caching
Compression
Encryption
Snapshots
Scalability

Bcachefs prioritizes robustness and reliability

According to Kent, Bcachefs ensures that users won't lose their data. Bcachefs is an extension of bcache, which was designed as a caching layer to improve block I/O performance. It uses a solid-state drive as a cache for a (slower, larger) underlying storage device. Mainline bcache is not a typical filesystem but looks like a special kind of block device. It handles the movement of blocks of data between fast and slow storage, ensuring that the most frequently used data is kept on the faster device. bcache manages data in a way that yields high performance while ensuring that no data is ever lost, even when an unclean shutdown takes place.

You can head over to LKML.org for more information on this announcement.
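For readers unfamiliar with the copy-on-write idea at the heart of bcachefs (and of zfs and btrfs), here is a minimal toy sketch in Python — not bcachefs code, just the general technique: writes go to freshly allocated blocks and commit by swapping a pointer, so a crash mid-write never corrupts the old data.

```python
# Toy copy-on-write store: illustrative only, not how bcachefs is implemented.
class CowStore:
    def __init__(self):
        self.blocks = {}   # block_id -> bytes; blocks are never mutated in place
        self.index = {}    # name -> block_id (the live "pointer")
        self.next_id = 0

    def write(self, name, data: bytes):
        # 1. Write the new data to a freshly allocated block.
        block_id = self.next_id
        self.next_id += 1
        self.blocks[block_id] = data
        # 2. Commit by updating the pointer. If we crash before this
        #    line, the old version is still fully intact.
        self.index[name] = block_id

    def read(self, name) -> bytes:
        return self.blocks[self.index[name]]

store = CowStore()
store.write("file", b"v1")
store.write("file", b"v2")   # the old block holding b"v1" is untouched
print(store.read("file"))    # b'v2'
```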
Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project


Red Hat Summit 2019 Highlights: Microsoft collaboration, Red Hat Enterprise Linux 8 (RHEL 8), IDC study predicts support for software worth $10 trillion

Savia Lobo
09 May 2019
11 min read
The 3-day Red Hat Summit 2019 kicked off yesterday at the Boston Convention and Exhibition Center in the United States. There have already been several exciting announcements, including a collaboration between Red Hat and Microsoft, with Satya Nadella (Microsoft's CEO) coming over to announce it, the release of Red Hat Enterprise Linux 8, an IDC study predicting that software running on Red Hat Enterprise Linux will contribute more than $10 trillion to global business revenues in 2019, and much more. Let us have a look at each of these announcements in brief.

Azure Red Hat OpenShift: a Red Hat and Microsoft collaboration

The latest Red Hat and Microsoft collaboration, Azure Red Hat OpenShift, must be important when Microsoft's CEO himself came across from Seattle to present it. Red Hat had already brought OpenShift to Azure last year; this is the next step in its partnership with Microsoft. Azure Red Hat OpenShift combines Red Hat's enterprise Kubernetes platform OpenShift (running on Red Hat Enterprise Linux) with Microsoft's Azure cloud.

With Azure Red Hat OpenShift, customers can combine Kubernetes-managed, containerized applications with Azure workflows. In particular, the two companies see this pairing as a road forward for hybrid-cloud computing. Paul Cormier, President of Products and Technologies at Red Hat, said, "Azure Red Hat OpenShift provides a consistent Kubernetes foundation for enterprises to realize the benefits of this hybrid cloud model. This enables IT leaders to innovate with a platform that offers a common fabric for both app developers and operations."

Some features of Azure Red Hat OpenShift include:

Fully managed clusters with master, infrastructure and application nodes managed by Microsoft and Red Hat; plus, no VMs to operate and no patching required.
Regulatory compliance provided through compliance certifications similar to other Azure services.
Enhanced flexibility to more freely move applications from on-premises environments to the Azure public cloud via the consistent foundation of OpenShift.
Greater speed to connect to Azure services from on-premises OpenShift deployments.
Extended productivity with easier access to Azure public cloud services such as Azure Cosmos DB, Azure Machine Learning and Azure SQL DB for building the next generation of cloud-native enterprise applications.

According to the official press release, "Microsoft and Red Hat are also collaborating to bring customers containerized solutions with Red Hat Enterprise Linux 8 on Azure, Red Hat Ansible Engine 2.8 and Ansible Certified modules. In addition, the two companies are working to deliver SQL Server 2019 with Red Hat Enterprise Linux 8 support and performance enhancements."

Red Hat Enterprise Linux 8 (RHEL 8) is now generally available

Red Hat Enterprise Linux 8 (RHEL 8) provides a consistent OS across public, private, and hybrid cloud environments. It also provides users with version choice, long life-cycle commitments, a robust ecosystem of certified hardware, software, and cloud partners, and now comes with built-in management and predictive analytics.

Features of RHEL 8

Support for the latest and emerging technologies

RHEL 8 is supported across different architectures and environments so that the user has a consistent and stable OS experience. This helps them adapt to emerging tech trends such as machine learning, predictive analytics, Internet of Things (IoT), edge computing, and big data workloads.
This is largely enabled by hardware innovations like GPUs, which can assist machine learning workloads. RHEL 8 supports deploying and managing GPU-accelerated NGC containers on Red Hat OpenShift. These AI containers deliver integrated software stacks and drivers to run GPU-optimized machine learning frameworks such as TensorFlow, Caffe2, PyTorch, MXNet, and others. Also, NVIDIA's DGX-1 and DGX-2 servers are RHEL certified and are designed to deliver powerful solutions for complex AI challenges.

Introduction of Application Streams

RHEL 8 introduces Application Streams, where fast-moving languages, frameworks and developer tools are updated frequently without impacting the core resources. This melds faster developer innovation with production stability in a single, enterprise-class operating system.

Abstracts complexities of granular sysadmin tasks with the RHEL web console

RHEL 8 abstracts away many of the deep complexities of granular sysadmin tasks behind the Red Hat Enterprise Linux web console. The console provides an intuitive, consistent graphical interface for managing and monitoring the Red Hat Enterprise Linux system, from the health of virtual machines to overall system performance. To further improve ease of use, RHEL supports in-place upgrades, providing a more streamlined, efficient and timely path for users to convert Red Hat Enterprise Linux 7 instances to Red Hat Enterprise Linux 8 systems.

Red Hat Enterprise Linux System Roles for managing and configuring Linux in production

Red Hat Enterprise Linux System Roles automate many of the more complex tasks around managing and configuring Linux in production. Powered by Red Hat Ansible Automation, System Roles are pre-configured Ansible modules that enable ready-made automated workflows for handling common, complex sysadmin tasks. This automation makes it easier for new systems administrators to adopt Linux protocols and helps to eliminate human error as the cause of common configuration issues.

Support for OpenSSL 1.1.1 and TLS 1.3 cryptographic standards

To enhance security, RHEL 8 supports the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards. This provides access to the strongest, latest standards in cryptographic protection that can be implemented system-wide via a single command, limiting the need for application-specific policies and tuning. (A quick client-side sketch for exercising TLS 1.3 appears after this feature list.)

Support for the Red Hat container toolkit

With cloud-native applications and services frequently driving digital transformation, RHEL 8 delivers full support for the Red Hat container toolkit. Based on open standards, the toolkit provides technologies for creating, running and sharing containerized applications. It helps to streamline container development and eliminates the need for bulky, less secure container daemons.

Other additions to RHEL 8 include:

It drives added value for specific hardware configurations and workloads, including the Arm and POWER architectures as well as real-time applications and SAP solutions.
It forms the foundation for Red Hat's entire hybrid cloud portfolio, starting with Red Hat OpenShift 4 and the upcoming Red Hat OpenStack Platform 15. Red Hat Enterprise Linux CoreOS, a minimal-footprint operating system designed to host Red Hat OpenShift deployments, is built on RHEL 8 and will be released soon.
Red Hat Enterprise Linux 8 is also broadly supported as a guest operating system on Red Hat hybrid cloud infrastructure, including Red Hat OpenShift 4, Red Hat OpenStack Platform 15 and Red Hat Virtualization 4.3.
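As referenced in the TLS 1.3 paragraph above, here is a minimal sketch of exercising TLS 1.3 from a client, using Python's standard ssl module (which exposes TLS 1.3 when the underlying OpenSSL is 1.1.1 or newer); the host name is just an example:

```python
import socket
import ssl

# Require TLS 1.3 for this connection; this fails if the local OpenSSL
# build lacks TLS 1.3 support or the server cannot negotiate it.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("www.redhat.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.redhat.com") as tls:
        print(tls.version())  # prints 'TLSv1.3' on success
```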
Red Hat Universal Base Image becomes generally available

Red Hat Universal Base Image, a userspace image derived from Red Hat Enterprise Linux for building Red Hat certified Linux containers, is now generally available. The Red Hat Universal Base Image is available to all developers, with or without a Red Hat Enterprise Linux subscription, providing a more secure and reliable foundation for building enterprise-ready containerized applications. Applications built with the Universal Base Image can be run anywhere, and gain the benefits of the Red Hat Enterprise Linux life cycle and support from Red Hat when run on Red Hat Enterprise Linux or Red Hat OpenShift Container Platform.

Red Hat reveals results of a commissioned IDC study

Yesterday, at its Summit, Red Hat also announced new research from IDC that examines the contributions of Red Hat Enterprise Linux to the global economy. The press release says: "According to the IDC study, commissioned by Red Hat, software and applications running on Red Hat Enterprise Linux are expected to contribute to more than $10 trillion worth of global business revenues in 2019, powering roughly 5% of the worldwide economy as a cross-industry technology foundation."

According to IDC's research, IT organizations using Red Hat Enterprise Linux can expect to save $6.7 billion in 2019 alone. IT executives responding to the global survey point to Red Hat Enterprise Linux:

Reducing the annual cost of software by 52%
Reducing the amount of time IT staff spend doing standard IT tasks by 25%
Reducing costs associated with unplanned downtime by 5%

To know more about this announcement in detail, read the official press release.

Red Hat infrastructure migration solution helped CorpFlex reduce IT infrastructure complexity and costs by 87%

Using Red Hat's infrastructure migration solution, CorpFlex, a managed service provider in Brazil, successfully reduced the complexity of its IT infrastructure and lowered costs by 87% through savings on licensing fees. Delivered as a set of integrated technologies and services, including Red Hat Virtualization and Red Hat CloudForms, the Red Hat infrastructure migration solution has provided CorpFlex with the framework for a complete hybrid cloud solution while helping the business improve its bottom line.

Red Hat Virtualization is built on Red Hat Enterprise Linux and the Kernel-based Virtual Machine (KVM) project. This gave CorpFlex the ability to virtualize resources, processes, and applications with a stable foundation for a cloud-native and containerized future. Migrating to Red Hat Virtualization can provide more affordable virtualization, improved performance, enhanced collaboration, extended automation, and more.

Diogo Santos, CTO of CorpFlex, said, "With Red Hat Virtualization, we've not only seen cost-saving in terms of licensing per virtual machine but we've also been able to enhance our own team's performance through Red Hat's extensive expertise and training."

To know more about this news in detail, head over to the official press release.

Deutsche Bank increases operational efficiency by using Red Hat's open hybrid cloud technologies to power its 'Fabric' application platform

Fabric is a key component of Deutsche Bank's digital transformation strategy and serves as an automated, self-service hybrid cloud platform that enables application teams to develop, deliver and scale applications faster and more efficiently.
Red Hat Enterprise Linux has served as a core operating platform for the bank for a number of years, supporting a common foundation for workloads both on-premises and in the bank's public cloud environment. For Fabric, the bank continues using Red Hat's cloud-native stack, built on the backbone of the world's leading enterprise Linux platform, with Red Hat OpenShift Container Platform. The bank deployed Red Hat OpenShift Container Platform on Microsoft Azure as part of a security-focused new hybrid cloud platform for building, hosting and managing banking applications. By deploying Red Hat OpenShift Container Platform, the industry's most comprehensive enterprise Kubernetes platform, on Microsoft Azure, IT teams can take advantage of massive-scale cloud resources with a standard and consistent PaaS-first model.

According to the press release, "The bank has achieved its objective of increasing its operational efficiency, now running over 40% of its workloads on 5% of its total infrastructure, and has reduced the time it takes to move ideas from proof-of-concept to production from months to weeks."

To know more about this news in detail, head over to the official press release.

Lockheed Martin and Red Hat team up to accelerate upgrades to the U.S. Air Force's F-22 Raptor fighter jets

Red Hat announced that Lockheed Martin is working with it to modernize the application development process used to bring new capabilities to the U.S. Air Force's fleet of F-22 Raptor fighter jets. Lockheed Martin Aeronautics replaced the waterfall development process previously used for the F-22 Raptor with an agile methodology and DevSecOps practices that are more adaptive to the needs of the U.S. Air Force. Together, Lockheed Martin and Red Hat created an open architecture based on Red Hat OpenShift Container Platform that has enabled the F-22 team to accelerate application development and delivery.

"The Lockheed Martin F-22 Raptor is one of the world's premier fighter jets, with a combination of stealth, speed, agility, and situational awareness. Lockheed Martin is working with the U.S. Air Force on innovative, agile new ways to deliver the Raptor's critical capabilities to warfighters faster and more affordably," the press release mentions.

Lockheed Martin chose Red Hat Open Innovation Labs to lead it through the agile transformation process, help it implement an open source architecture onboard the F-22, and simultaneously disentangle its web of embedded systems to create something more agile and adaptive to the needs of the U.S. Air Force. Red Hat Open Innovation Labs' dual-track approach to digital transformation combined enterprise IT infrastructure modernization with hands-on instruction, helping Lockheed's team adopt agile development methodologies and DevSecOps practices.

During the Open Innovation Labs engagement, a cross-functional team of five developers, two operators, and a product owner worked together to develop a new application for the F-22 on OpenShift. After seeing an early impact with the initial project team, within six months Lockheed Martin had scaled its OpenShift deployment and its use of agile methodologies and DevSecOps practices to a 100-person F-22 development team. During a recent enablement session, the F-22 Raptor scrum team improved its ability to forecast future sprints by 40%, the press release states.

To know more about this news in detail, head over to the official press release on Red Hat.
We will update this post as the Summit continues and further announcements are made. Visit the official Red Hat Summit website to know more about the sessions conducted, the list of keynote speakers, and more.

Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend


Lyft announces Envoy Mobile, an iOS and Android client network library for mobile application networking

Sugandha Lahoti
19 Jun 2019
3 min read
Yesterday, Lyft released the initial OSS preview of Envoy Mobile, an iOS and Android client network library that brings Lyft's Envoy Proxy to mobile platforms.

https://twitter.com/mattklein123/status/1140998175722835974

Envoy Proxy was initially built at Lyft to solve the networking and observability issues inherent in large polyglot server-side microservice architectures. It soon gained wide public appreciation and is used by major public cloud providers, end-user companies, and infrastructure startups. Now, Envoy is coming to iOS and Android, providing an API and abstraction for mobile application networking.

Envoy Mobile is currently at a very early stage of development. The initial release brings the following features:

The ability to compile Envoy on both Android and iOS: Envoy Mobile uses intelligent protobuf code generation and an abstract transport to help both iOS and Android provide similar interfaces and ergonomics for consuming APIs.
The ability to run Envoy on a thread within an application, utilizing it effectively as an in-process proxy server.
Swift/Obj-C/Kotlin demo applications that utilize the exposed "raw" APIs to interact with Envoy and make network calls.

Long-term goals

Envoy Mobile initially provides Swift APIs for iOS and Kotlin APIs for Android, but depending on community interest the team will consider adding support for additional languages in the future. In the long term, they are also planning to fold the gRPC Server Reflection Protocol into a streaming reflection service API. This API will allow both Envoy and Envoy Mobile to fetch generic protobuf definitions from a central IDL service, which can then be used to implement annotation-driven networking via reflection. They also plan to bring xDS configuration to mobile clients, in the form of routing, authentication, failover, load balancing, and other policies driven by a global load balancing system.

Envoy Mobile can also add cross-platform functionality when using strongly typed IDL APIs. Some examples of annotations planned on the roadmap are caching, priority, streaming, marking an API as offline/deferred capable, and more.

Envoy Mobile is getting loads of appreciation from developers, with many happy that its development is happening in the open. A comment on Hacker News reads, "I really like how they're releasing this as effectively the first working proof of concept and committing to developing the rest entirely in the open - it's a great opportunity to see how a project of this scale plays out in real-time on GitHub."

https://twitter.com/omerlh/status/1141225499139682305
https://twitter.com/dinodaizovi/status/1141157828247347200

Currently the project is in a pre-release stage. Not all features are implemented, and it is not ready for production use. However, you can get started here. Also see the demo release and the roadmap, where Lyft plans to develop Envoy Mobile entirely in the open.

Related News

Uber and Lyft drivers go on strike a day before Uber IPO roll-out
Lyft introduces Amundsen, a data discovery and metadata engine for its researchers and data scientists
Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race


Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic

Amrata Joshi
26 Jun 2019
4 min read
Yesterday the team at AWS launched VPC Traffic Mirroring, a new feature that can be used with existing Virtual Private Clouds (VPCs) for capturing and inspecting network traffic at scale.

https://twitter.com/nickpowpow/status/1143550924125868033

Features of VPC Traffic Mirroring

Detecting and responding to network attacks
With VPC Traffic Mirroring, users can detect network and security anomalies, extract traffic of interest from any workload in a VPC, and route it to detection tools, allowing them to detect and respond to attacks more quickly than is possible with traditional log-based tools.

Better network visibility
Users get the network visibility and control needed to make better security decisions.

Regulatory and compliance requirements
It is now possible to meet regulatory and compliance requirements that mandate monitoring, logging, and the like.

Troubleshooting
Users can mirror application traffic internally for testing and troubleshooting and analyze traffic patterns, making it easier to proactively locate choke points that would hamper application performance.

The blog post reads, "You can think of VPC Traffic Mirroring as a 'virtual fiber tap' that gives you direct access to the network packets flowing through your VPC."

Mirror traffic from any EC2 instance

Users can choose to capture all traffic, or use filters to capture only the packets of particular interest, optionally limiting the number of bytes captured per packet. VPC Traffic Mirroring can be used in a multi-account AWS environment for capturing traffic from VPCs spread across many AWS accounts.

Users can mirror traffic from any EC2 instance powered by the AWS Nitro system. It is now possible to replicate the network traffic from an EC2 instance within an Amazon Virtual Private Cloud (Amazon VPC) and forward that traffic to security and monitoring appliances for use cases such as threat monitoring, content inspection, and troubleshooting. These appliances can be deployed on an individual Amazon EC2 instance or on a fleet of instances behind a Network Load Balancer (NLB) with a User Datagram Protocol (UDP) listener. Amazon VPC Traffic Mirroring also supports traffic filtering and packet truncation, allowing customers to extract only the traffic they are interested in monitoring.

Improved security

VPC Traffic Mirroring captures packets at the Elastic Network Interface (ENI) level, where they cannot be tampered with, thus strengthening security. Users can choose to analyze their network traffic with a wide range of monitoring solutions that are integrated with Amazon VPC Traffic Mirroring on AWS Marketplace.

Key elements of VPC Traffic Mirroring

Mirror source: an AWS network resource within a particular VPC that is used as the source of traffic. VPC Traffic Mirroring supports Elastic Network Interfaces (ENIs) as mirror sources.

Mirror target: an ENI or Network Load Balancer that serves as the destination for the mirrored traffic. The mirror target can be in the same AWS account as the mirror source, or in a different account for implementation of the central-VPC model.

Mirror filter: a specification of the inbound or outbound traffic that is to be captured or skipped. A filter can specify a protocol, port ranges, and CIDR blocks for the source and destination.

Traffic mirror session: a connection between a mirror source and a target that uses a filter. Sessions are numbered and evaluated in order, and the first match (accept or reject) determines the fate of a packet. A given packet is sent to at most one target.
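Putting these four elements together, here is a hedged sketch of wiring up a mirror session with boto3 (the AWS SDK for Python); the ENI IDs are placeholders, and the current boto3 documentation should be checked for the full parameter set:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Mirror target: an ENI attached to the monitoring appliance (placeholder ID).
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0123456789abcdef0",
    Description="Monitoring appliance",
)

# Mirror filter: accept all inbound TCP traffic.
flt = ec2.create_traffic_mirror_filter(Description="Inbound TCP only")
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=flt["TrafficMirrorFilter"]["TrafficMirrorFilterId"],
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,  # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# Traffic mirror session: tie the source ENI, target, and filter together.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0fedcba9876543210",  # mirror source (placeholder)
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=flt["TrafficMirrorFilter"]["TrafficMirrorFilterId"],
    SessionNumber=1,
)
```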
VPC Traffic Mirroring is available now, and customers can start using it in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia); support for these regions is pending and will be added soon, per the official post.

To know more about this news, check out Amazon's official blog post.

Amazon adds UDP load balancing support for Network Load Balancer
Amazon patents AI-powered drones to provide 'surveillance as a service'
Amazon is being sued for recording children's voices through Alexa without consent


After RHEL 8 release, users await the release of CentOS 8

Vincy Davis
10 May 2019
2 min read
The release of Red Hat Enterprise Linux 8 (RHEL 8) this week has everyone waiting for the CentOS 8 rebuild to follow. The release of CentOS 8 will require a major overhaul of the installer, packages, packaging, and build systems so that they work with the newer OS. CentOS 7 was released in 2014, shortly after RHEL 7.

So far the team at CentOS has its new build system set up and is currently working on the artwork. They still need to work through multiple series of build loops in order to get all of the CentOS 8.0 packages built in a compatible fashion. There will then be an installer update, followed by one or more release candidates. Only after all of these steps will CentOS 8 finally be available to its users.

The RHEL 8 release has made many users excited for the CentOS 8 build. A user on Reddit commented, "Thank you to the project maintainers; while RedHat does release the source code anyone who's actually compiled from source knows that it's never push-button easy."

Another user added, "Thank you Red Hat! You guys are amazing. The entire world has benefited from your work. I've been a happy Fedora user for many years, and I deeply appreciate how you've made my life better. Thank you for building an amazing set of distros, and thank you for pushing forward many of the huge projects that improve our lives such as Gnome and many more. Thank you for your commitment to open source and for living your values. You are heroes to me."

So far, no release date has been declared for CentOS 8, but a rough timeline has been shared. To read about the steps needed to make a CentOS rebuild, head over to the CentOS wiki page.

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
Red Hat released RHEL 7.6

Istio 1.3 releases with traffic management, improved security, and more!

Amrata Joshi
16 Sep 2019
3 min read
Last week, the team behind Istio, an open-source service mesh platform, announced Istio 1.3. This release makes the service mesh platform easier to use.

What's new in Istio 1.3?

Traffic management

In this release, automatic determination of HTTP or TCP has been added for outbound traffic when ports are not named according to Istio's conventions. The team has added a mode to the Gateway API for mutual TLS operation. Envoy proxy health checks have been improved and now check Envoy's readiness status. Load balancing has been improved to direct traffic to the same region and zone by default, and the Redis load balancer now defaults to MAGLEV when using the Redis proxy.

Improved security

This release comes with trust domain validation for services that use mutual TLS. By default, the server only authenticates requests from the same trust domain. The team has added SDS (Secret Discovery Service) support for delivering private keys and certificates to each of the Istio control plane services, and has implemented major security policies, including RBAC, directly in Envoy.

Experimental telemetry

In this release, the team has improved the Istio proxy to emit HTTP metrics directly to Prometheus, without needing the istio-telemetry service.

Handles inbound traffic securely

Istio 1.3 secures and handles all inbound traffic on any port without the need for containerPort declarations. The team has also eliminated the infinite loops caused in the iptables rules when workload instances send traffic to themselves.

Enhanced EnvoyFilter API

The team has enhanced the EnvoyFilter API so that users can fully customize HTTP/TCP listeners and their filter chains returned by LDS (Listener Discovery Service), the Envoy HTTP route configuration returned by RDS (Route Discovery Service), and much more.

Improved control plane monitoring

The team has enhanced control plane monitoring by adding new metrics to monitor configuration state, metrics for the sidecar injector, and a new Grafana dashboard for Citadel.

Users seem excited about this release.

https://twitter.com/HamzaZ21823474/status/1172235176438575105
https://twitter.com/vijaykodam/status/1172237003506798594

To know more about this news, check out the release notes.

Other interesting news in Cloud & Networking

StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
The Continuous Intelligence report by Sumo Logic highlights the rise of multi-cloud adoption and open source technologies like Kubernetes
Kong announces Kuma, an open-source project to overcome the limitations of first-generation service mesh technologies


ISPA nominated Mozilla in the "Internet Villain" category for DNS over HTTPS push, withdrew nominations and category after community backlash

Fatema Patrawala
11 Jul 2019
6 min read
On Tuesday, the Internet Services Providers' Association (ISPA), the UK's trade association for providers of internet services, announced that the nomination of Mozilla Firefox had been withdrawn from the "Internet Villain" category. The decision came after a global backlash to the nomination of Mozilla for its DNS-over-HTTPS (DoH) push. ISPA withdrew the Internet Villain category as a whole from the ISPA Awards 2019 ceremony, which will be held today in London.

https://twitter.com/ISPAUK/status/1148636700467453958

The official blog post reads, "Last week ISPA included Mozilla in our list of Internet Villain nominees for our upcoming annual awards. In the 21 years the event has been running it is probably fair to say that no other nomination has generated such strong opinion. We have previously given the award to the Home Secretary for pushing surveillance legislation, leaders of regimes limiting freedom of speech and ambulance-chasing copyright lawyers. The villain category is intended to draw attention to an important issue in a light-hearted manner, but this year has clearly sent the wrong message, one that doesn't reflect ISPA's genuine desire to engage in a constructive dialogue. ISPA is therefore withdrawing the Mozilla nomination and Internet Villain category this year."

Mozilla Firefox, the preferred browser for a lot of users, encourages privacy protection and offers feature options to keep one's Internet activity as private as possible. One recently proposed feature, DoH (DNS-over-HTTPS), still in the testing phase, didn't receive a good response from the ISPA. Hence, the ISPA nominated Mozilla as one of the "Internet Villains" among its nominees for 2019, citing Mozilla's support for DoH.

https://twitter.com/ISPAUK/status/1146725374455373824

Mozilla responded by saying that this is one way to know they are fighting the good fight.

https://twitter.com/firefox/status/1147225563649564672

The announcement garnered a lot of criticism from the community, which rebuked ISPA for promoting online censorship and enabling rampant surveillance; some commenters called ISPA the real Internet Villain in this scenario. Some of the tweet responses are given below:

https://twitter.com/larik47/status/1146870658246352896
https://twitter.com/gon_dla/status/1147158886060908544
https://twitter.com/ultratethys/status/1146798475507617793

Along with Mozilla, the Article 13 Copyright Directive and United States President Donald Trump also appeared in the nominations list. Here's how the ISPA explained the nominations in its announcement:

"Mozilla – for their proposed approach to introduce DNS-over-HTTPS in such a way as to bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.
Article 13 Copyright Directive – for threatening freedom of expression online by requiring 'content recognition technologies' across platforms
President Donald Trump – for causing a huge amount of uncertainty across the complex, global telecommunications supply chain in the course of trying to protect national security"

Why are the ISPs pushing back against DNS-over-HTTPS?

DoH means that your DNS requests are encrypted inside an HTTPS connection. Traditionally, DNS requests are sent unencrypted, so your DNS provider or ISP can monitor and control your browsing activity.
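To make the mechanism concrete, here is a minimal sketch of a DoH lookup in Python against Cloudflare's public DNS-over-HTTPS JSON endpoint (one of several public resolvers; the domain queried is just an example):

```python
import requests

# Resolve example.com over HTTPS instead of plain UDP port 53.
# The resolver still sees the query, but on-path observers (like an ISP)
# only see an ordinary HTTPS connection to cloudflare-dns.com.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```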
Without DoH, blocking and content filtering can easily be enforced through your DNS provider, or the ISP can do so when it wants. DoH takes that out of the equation, giving you a more private browsing experience.

Admittedly, big broadband ISPs and politicians are concerned that large-scale third-party deployments of DoH, which encrypts DNS requests (the lookups that turn human-readable domain names into IP addresses) using the common HTTPS protocol for websites, could disrupt their ability to censor, track and control related internet services. That is, however, a particularly narrow way of looking at the technology, because at its core DoH is about protecting user privacy and making internet connections more secure. As a result, DoH is widely praised and supported by the wider internet community.

Mozilla is not alone in pushing DoH, but it was singled out by the ISPA because of its proposal to enable the feature by default within Firefox, which is yet to happen. Google is also planning to introduce its own DoH solution in its Chrome browser. The result could be that ISPs lose a lot of their control over DNS, breaking their internet censorship plans.

Is DoH useful for internet users? If so, how?

On one side of the coin, DoH lets users bypass any content filters enforced by the DNS provider or the ISP, which puts a stop to internet censorship. But on the other side, if you are a parent, you can no longer set content filters if your kid uses DoH in Mozilla Firefox; DoH could thus be a way for some to bypass parental controls, which could be a bad thing. This is the reason the ISPA gave for nominating Mozilla for the Internet Villain category: that DNS-over-HTTPS will bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK. Also, using DoH means you can no longer use the local hosts file, in case you are using it for ad blocking or for any other reason.

The internet community criticized the way ISPA handled the backlash and withdrew the category as a whole. One of the user comments on Hacker News reads, "You have to love how all their "thoughtful criticisms" of DNS over HTTPS have nothing to do with the things they cited in their nomination of Mozilla as villain. Their issue was explicitly "bypassing UK filtering obligations" not that load of flaming horseshit they just pulled out of their ass in response to the backlash."

https://twitter.com/VModifiedMind/status/1148682124263866368

Highlights from Mary Meeker's 2019 Internet trends report
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher


Containous introduces Maesh, a lightweight and simple Service Mesh to ease microservices adoption

Savia Lobo
05 Sep 2019
2 min read
Yesterday, Containous, a cloud-native networking company, announced Maesh, a lightweight and simple service mesh aimed at making service-to-service communication simpler for developers building modern, cloud-native applications. It is easy to use and fully featured, helping developers connect, secure and monitor traffic to and from their microservices-based applications. Maesh also supports the latest Service Mesh Interface (SMI) specification, a standard for service mesh interoperability in Kubernetes.

Maesh helps developers adopt microservices by offering an easy way to connect, secure and monitor network traffic in any Kubernetes environment. It helps developers optimize internal traffic, visualize traffic patterns, and secure communication channels, all while improving application performance.

Also read: Red Hat announces the general availability of Red Hat OpenShift Service Mesh

Maesh is designed to be completely non-invasive, allowing development teams across the organization to incrementally "opt in" applications progressively over time. It is backed by Traefik's rich feature set, providing OpenTracing; load balancing for HTTP, gRPC, WebSocket and TCP; rich routing rules; retries and fail-overs; not to mention access controls, rate limits, and circuit breakers.

Maesh can run in both TCP and HTTP mode. "In HTTP mode, Maesh leverages Traefik's feature set to enable rich routing on virtual-host, path, headers, cookies. Using TCP mode allows seamless and easy integration with SNI routing support," the Containous team reports. It also enables critical features across any Kubernetes environment, including observability, multi-protocol support, traffic management, security and safety.

Also read: Mapbox introduces MARTINI, a client-side terrain mesh generation code

In an email statement to us, Emile Vauge, CEO of Containous, said, "With Maesh, Containous continues to innovate with the mission to drastically simplify cloud-native adoption for all enterprises. We've been proud of how popular Traefik has been for developers as a critical open source solution, and we're excited to now bring them Maesh."

https://twitter.com/resouer/status/1169310994490748928

To know more about Maesh in detail, read Containous' Medium blog post.

Other interesting news in Networking

Amazon announces improved VPC networking for AWS Lambda functions
Pivotal open sources kpack, a Kubernetes-native image build service
Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more


Internet Governance Project (IGP) survey on IPv6 adoption, initial reports

Prasad Ramesh
07 Jan 2019
3 min read
The Internet Governance Project (IGP) did some research last year to understand the factors affecting network operators' decisions on IPv6 adoption. The study was done by Georgia Tech's IGP in collaboration with the Internet Corporation for Assigned Names and Numbers (ICANN), and was commissioned because both IGP and ICANN believed that the internet community needs a better understanding of the motives for upgrading from IPv4 to IPv6. The study, titled The Hidden Standards War: Economic Factors Affecting IPv6 Deployment, should be out this month.

IPv6 is a newer version of the Internet Protocol with a much larger address space. IPv4 addresses are limited to about 4 billion, so they may get depleted in the future, which means IPv6 adoption will have to happen at some point. IPv6 can hold 2^128 addresses, which is more than enough for the foreseeable future. IPv6 addresses are also longer than IPv4 addresses and are written in hexadecimal form, containing both numbers and letters. (A short worked comparison of the two address spaces appears at the end of this piece.)

Initial results of the study

The report by IGP is still in the draft stage, but some initial findings have been shared. It was found that IPv6 is not going to be disregarded completely after all, especially in mobile networks, where both the hardware and the software support the use of IPv6; although IPv6 capability is often turned off due to lack of compatibility, it remains present.

The initial findings show that 79% of countries, a total of 169, did not have any noteworthy IPv6 deployment: the deployment percentage remained at or below 5% when the study was conducted last year. 12% of countries, 26 in total, had increasing deployment, and 8%, or 18 countries, showed a plateau, with IPv6 capability growth stopping between 8% and 59%.

Why the slow adoption?

It all comes down to the costs and benefits associated with upgrading. When economic incentives were investigated, it was found that there is no real need for operators to upgrade their hardware. No one uses IPv6 exclusively, as all public and almost all private network service providers have to offer full IPv4 compatibility. With this condition in place, operators have only three choices:

Stick to IPv4
Implement dual stack and provide both
Run IPv6 where compatible and run some tunneling for IPv4 compatibility

To move towards IPv6, dual stack is not economical, so the third option seems to be the only viable one. There is little benefit for operators in shifting to IPv6; even if one operator migrates, it puts no pressure on the others to follow. Network operators bear the maintenance costs exclusively, so a wealthier country can deploy more IPv6 networks.

Even though IPv6 was introduced in 1994, a big problem for furthering adoption is that it is incompatible with IPv4. IPv6 adoption can make sense if a network needs to grow, but most networks don't need to grow. Hence, instead of buying new hardware and software to run IPv6, operators would rather just buy additional IPv4 addresses, which are cheaper. The bottom line is that there is no considerable incentive to change protocol until the remaining IPv4 pool nears depletion.

IPv6 support to be automatically rolled out for most Netify Application Delivery Network users
Oath's distributed network telemetry collector 'Panoptes' is now open source!
5G – Trick or Treat?
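As promised above, a quick worked comparison of the two address spaces, using nothing but Python's standard ipaddress module and integer arithmetic:

```python
import ipaddress

ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 addresses: {ipv4_total:,}")    # 4,294,967,296 (~4.3 billion)
print(f"IPv6 addresses: {ipv6_total:.3e}")  # ~3.403e+38

# Even a single standard /64 IPv6 subnet dwarfs the entire IPv4 internet:
subnet = ipaddress.ip_network("2001:db8::/64")  # documentation prefix
print(subnet.num_addresses // ipv4_total)       # ~4.3 billion IPv4 internets
```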

Oath's distributed network telemetry collector 'Panoptes' is now open source!

Melisha Dsouza
04 Oct 2018
3 min read
Yesterday, the Oath network automation team open sourced Panoptes, a distributed system for collecting, enriching and distributing network telemetry. This pluggable, distributed and high-performance data collection system supports multiple polling formats, including SNMP and vendor-specific APIs. It also supports emerging streaming telemetry standards, including gNMI. Panoptes is written primarily in Python and leverages multiple open-source technologies to provide the most value for the least development effort.

Panoptes Architecture (diagram source: Yahoo Developers)

The architecture is designed to enable easy data distribution and integration with other systems. The plugin that pushes metrics into InfluxDB allows Panoptes to evolve with industry standards, and the combination of Grafana and the InfluxData ecosystem lets teams quickly set up a fully featured monitoring environment.

Legacy polling systems had multiple inherent issues, including overpolling due to multiple point solutions for metrics, a lack of data normalization, and no consistent data enrichment or integration with infrastructure discovery systems. Panoptes aims to overcome all of these issues.

Check scheduling is accomplished using Celery, a horizontally scalable, open-source scheduler that utilizes a Redis data store (a minimal scheduling sketch appears at the end of this piece). Panoptes ships with a simple, CSV-based discovery system that can be integrated with a CMDB; from there, Panoptes will manage the task of scheduling polling for the desired devices. Users can also develop custom discovery plugins to integrate with their CMDB and other device inventory data sources.

Vendors are moving towards a more streamlined model of telemetry, and Panoptes' flexible architecture will minimize the effort required to adopt these new protocols.

The metric bus at the center of the model is implemented on Kafka. All data plane transactions flow across this bus: discovery plugins publish devices to the bus, polling plugins publish metrics to the bus, and numerous clients read the data off the bus for additional processing and forwarding. This architecture enables easy data distribution and integration with other systems.

The team at Oath has deployed Panoptes in a tiered, federated model and has developed numerous custom applications on the platform, including a load balancer monitor, a BGP session monitor, and a topology discovery application, all at a reduced cost thanks to Panoptes.

This open-source release is packaged for easy deployment into any Linux-based environment and is available on GitHub. You can head over to the Yahoo Developer Network for deeper insights into this news.

Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools
Anaconda 5.3.0 released, takes advantage of Python's speed and feature improvements
Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
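Since Panoptes builds its check scheduling on Celery, here is a minimal, hypothetical sketch of Celery periodic scheduling with a Redis broker — generic Celery usage to illustrate the idea, not Panoptes' actual plugin API:

```python
from celery import Celery

app = Celery("poller", broker="redis://localhost:6379/0")

# Run the polling task every 60 seconds via Celery beat.
app.conf.beat_schedule = {
    "poll-router-1": {
        "task": "poller.poll_device",
        "schedule": 60.0,
        "args": ("192.0.2.1",),   # RFC 5737 documentation address
    },
}

@app.task(name="poller.poll_device")
def poll_device(host):
    # A real system would run an SNMP/gNMI poll here and publish the
    # resulting metrics onto a bus (Kafka, in Panoptes' case).
    print(f"polling {host}")
```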


libp2p: the modular P2P network stack by IPFS for better decentralized computing

Melisha Dsouza
09 Oct 2018
4 min read
libp2p is a P2P network stack introduced by the IPFS community. It is capable of discovering other peers and networks without resorting to centralized registries, which enables apps to work offline.

In July 2018, David Dias explained that the design of a 'location-addressed web' is the reason for its fragility: small errors in its backbone can shut down all running applications, and firewalls, routing issues, roaming issues, and network unreliability interfere with users having a smooth experience on the web. Thus came a need to re-imagine the network stack.

To solve all of the above problems, the InterPlanetary File System (IPFS) came into being. It is a decentralized web protocol based on content addressing, digital signatures, and peer-to-peer distribution. (A toy illustration of content addressing appears at the end of this piece.) Today, IPFS is used to build completely distributed, offline-capable web apps; it saves and distributes valuable datasets and moves billions of files. IPFS spawned several other projects, and libp2p is one of them. It enables users to run network applications free from runtime and address services, independent of their location.

libp2p tames the complexity of dealing with numerous protocols in a decentralized environment. It effectively helps users connect with multiple peers using only a single protocol, paving the way for the next generation of decentralized systems.

libp2p features

#1 Transport module

libp2p enables application developers to pick the modules needed to run their application; these modules vary depending on the runtime in which they execute. A libp2p node uses one or more Transports to dial and listen for connections. These transport modules offer a clean interface for dialing and listening, defined by the interface-transport specification.

#2 No prior assigning of ports

Before libp2p came into existence, users would assign a listener to a port and then assign ports to special protocols, so that other hosts would know in advance which port to dial. With libp2p, users do not have to assign ports beforehand.

#3 Encrypted communication

To ensure an encrypted connection, libp2p also supports a set of modules that encrypt every communication established.

#4 Peer discovery and routing

A peer discovery module helps libp2p find peers to connect to. Peer routing finds other peers in the network by intentionally issuing queries, which can be iterative or recursive, until a peer is found. A content routing mechanism is used to find where content lives in the network.

Using libp2p in IPFS

libp2p has been refactored into its own project so that other users can take advantage of it and be part of its ecosystem as well. It is what provides IPFS and other projects with P2P connectivity, support for multiple platforms and browsers, and many other advantages. Users can utilize the libp2p module to create their own libp2p bundle, customized with the features and defaults their needs dictate. For example, the team has built a browser-compatible version of libp2p that acts as the network layer of IPFS and leverages browser transports; you can head over to GitHub to check this example.

Keep Networks has also demonstrated the use of libp2p. Since participants need to know how to connect to each other, the team has come up with a simple example of peer-to-peer discovery, using a few pieces of the libp2p JS library to create nodes that discover and communicate with each other.
You can head over to their blog to check out how the example works. Another emerging use for libp2p is in blockchain applications. IPFS is used by blockchains and blockchain applications, and its subprotocols (libp2p, multihash, IPLD) can be extremely useful for blockchain standardization. A good example of this is getting the Ethereum blockchain into the browser, or into a Node.js process, using libp2p and running it through ethereum-vm. That said, developers will encounter multiple challenges while using libp2p for their blockchain applications; Chris Pacia, a backend developer at OB1, explains how to approach these challenges in his talk at QCon. With all the buzz around blockchains and decentralized computing these days, libp2p is making its rounds on the internet. For more insights on libp2p, you can visit the official site.

Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
Introducing TLS 1.3, the first major overhaul of the TLS protocol with improved security and speed

Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes

Savia Lobo
19 Apr 2019
2 min read
This week, the team at Linkerd announced an updated version of the service mesh, Linkerd 2.3. In this release, mTLS graduates from an experimental feature to a fully supported one. Along with several important security primitives, the key update in Linkerd 2.3 is that it turns authenticated, confidential communication between meshed services on by default.

Linkerd, a Cloud Native Computing Foundation (CNCF) project, is a service mesh designed to give platform-wide observability, reliability, and security without requiring configuration or code changes.

The team at Linkerd says, "Securing the communication between Kubernetes services is an important step towards adopting zero-trust networking. In the zero-trust approach, we discard assumptions about a datacenter security perimeter and instead push requirements around authentication, authorization, and confidentiality "down" to individual units. In Kubernetes terms, this means that services running on the cluster validate, authorize, and encrypt their own communication."

Linkerd 2.3 addresses the challenges of adopting zero-trust networking as follows:

The control plane ships with a certificate authority (called simply "identity").
The data plane proxies receive TLS certificates from this identity service, tied to the Kubernetes Service Account each proxy belongs to, and rotated every 24 hours.
The data plane proxies automatically upgrade all communication between meshed services to authenticated, encrypted TLS connections using these certificates.
Since the control plane also runs on the data plane, communication between control plane components is secured in the same way.

All of these changes are enabled by default and require no configuration (a rough sketch of the underlying mutual TLS mechanics appears at the end of this piece).

"This release represents a major step forward in Linkerd's security roadmap. In an upcoming blog post, Linkerd creator Oliver Gould will be detailing the design tradeoffs in this approach, as well as covering Linkerd's upcoming roadmap around certificate chaining, TLS enforcement, identity beyond service accounts, and authorization", the Linkerd official blog mentions.

These topics, along with all the other features in 2.3, will be discussed further at the upcoming Linkerd Online Community Meeting on Wednesday, April 24, 2019 at 10am PT. To know more about Linkerd 2.3 in detail, visit its official website.

Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
Platform9 open sources Klusterkit to simplify the deployment and operations of Kubernetes clusters
Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more
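As referenced above, here is a rough illustration of the mutual TLS handshake that Linkerd's proxies perform transparently on every meshed connection. This is not Linkerd's code: it is a minimal Node sketch of the general technique, and the certificate file names are hypothetical stand-ins for the certificates the identity service would issue and rotate.

```typescript
// Illustrative only: mutual TLS at the socket level, the technique Linkerd's
// proxies apply transparently. File names are hypothetical stand-ins for the
// certificates that the "identity" service issues and rotates every 24 hours.
import * as tls from 'node:tls'
import * as fs from 'node:fs'

// "Server" side: present a certificate AND demand one from the caller.
const server = tls.createServer({
  key: fs.readFileSync('proxy-key.pem'),
  cert: fs.readFileSync('proxy-cert.pem'),
  ca: fs.readFileSync('identity-ca.pem'),  // trust anchor, like the identity CA
  requestCert: true,                       // zero trust: authenticate the caller
  rejectUnauthorized: true,                // reject peers the CA did not sign
}, (socket) => {
  console.log('authenticated peer:', socket.getPeerCertificate().subject)
  socket.end('hello over an authenticated, encrypted connection\n')
})
server.listen(8443)

// "Client" side: present our own certificate so the server can verify us too.
const client = tls.connect({
  host: 'localhost',
  port: 8443,
  key: fs.readFileSync('client-key.pem'),
  cert: fs.readFileSync('client-cert.pem'),
  ca: fs.readFileSync('identity-ca.pem'),
}, () => console.log('mTLS established, authorized =', client.authorized))
```

Linkerd's value is that none of this appears in application code: the proxies terminate and originate these connections on each pod's behalf, and the identity service handles issuing and rotating the certificates.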

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!

Vincy Davis
09 May 2019
3 min read
The 3-day Red Hat Summit 2019 has kicked off at the Boston Convention and Exhibition Center in the United States. Yesterday, Red Hat announced a major release, Red Hat OpenShift 4, the next generation of its trusted enterprise Kubernetes platform, re-engineered to address the complex realities of container orchestration in production systems. Red Hat OpenShift 4 simplifies hybrid and multicloud deployments to help IT organizations roll out new applications, helping businesses thrive and gain momentum in an ever-increasing set of competitive markets.

Features of Red Hat OpenShift 4

Simplify and automate the cloud everywhere

Red Hat OpenShift 4 helps automate and operationalize the best practices for modern application platforms. It operates as a unified cloud experience for the hybrid world, and enables an automation-first approach including:

Self-managing platform for hybrid cloud: This provides a cloud-like experience via automatic software updates and lifecycle management across the hybrid cloud, enabling greater security, auditability, repeatability, ease of management, and user experience.
Adaptability and heterogeneous support: It will be available in the coming months across major public cloud vendors, including Alibaba, Amazon Web Services (AWS), Google Cloud, IBM Cloud, and Microsoft Azure, as well as private cloud technologies like OpenStack, virtualization platforms, and bare-metal servers.
Streamlined full stack installation: Paired with an automated process, full stack installation makes it easier to get started with enterprise Kubernetes.
Simplified application deployments and lifecycle management: Red Hat supports stateful and complex applications on Kubernetes with Operators, which enable self-operating application maintenance, scaling, and failover.

Trusted enterprise Kubernetes

The Cloud Native Computing Foundation (CNCF) has certified the Red Hat OpenShift Container Platform in accordance with Kubernetes. It is built on the backbone of the world's leading enterprise Linux platform, backed by the open source expertise, compatible ecosystem, and leadership of Red Hat. It also provides a codebase that helps secure key innovations from upstream communities.

Empowering developers to innovate

OpenShift 4 supports the evolving needs of application development as a consistent platform that optimizes developer productivity with:

Self-service, automation, and application services that help developers extend their applications through on-demand provisioning of application services.
Red Hat CodeReady Workspaces, which lets developers harness the power of containers and Kubernetes while working with the familiar Integrated Development Environment (IDE) tools they use day-to-day.
OpenShift Service Mesh, which combines the Istio, Jaeger, and Kiali projects into a single capability that encodes communication logic for microservices-based application architectures.
Knative for building serverless applications, in Developer Preview, which makes Kubernetes an ideal platform for building, deploying, and managing serverless or function-as-a-service (FaaS) workloads.
KEDA (Kubernetes-based Event-Driven Autoscaling), a collaboration between Microsoft and Red Hat in Developer Preview, which supports deployment of serverless event-driven containers on Kubernetes and enables Azure Functions in OpenShift. This allows accelerated development of event-driven, serverless functions across hybrid cloud and on-premises environments with Red Hat OpenShift.
Red Hat mentioned that OpenShift 4 will be available in the coming months. To read more details about OpenShift 4, head over to the official press release on Red Hat's site. To learn about the other major announcements at Red Hat Summit 2019, such as the Microsoft collaboration and Red Hat Enterprise Linux 8 (RHEL 8), visit our coverage of the Red Hat Summit 2019 highlights.

Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend

Slack was down for an hour yesterday, causing disruption during work hours

Fatema Patrawala
30 Jul 2019
2 min read
Yesterday, Slack reported an outage which started at 7:23 a.m. PDT and was fully resolved at 8:48 a.m. PDT. The Slack status page said that some people had issues sending messages, while others couldn't access their channels at all. Slack said it was fully up and running again about an hour after the issues emerged.

https://twitter.com/SlackStatus/status/1155869112406437889

According to Business Insider, more than 2,000 users reported issues with Slack via Downdetector. Employees around the globe rely on Slack to communicate, organize tasks, and share information. Downdetector's live outage map showed a concentration of reports in the United States and a few in Europe and Japan.

Slack has not yet shared the cause of the disruption on its status page. Last month, too, Slack suffered an outage, which was caused by server unavailability.

Users took to Twitter, sending funny memes and gifs about how much they depend on Slack to communicate.

https://twitter.com/slabodnick/status/1155858811518930946
https://twitter.com/gbhorwood/status/1155864432527867905
https://twitter.com/envyvenus/status/1155857852625555456
https://twitter.com/nhetmalaluan/status/1155863456991436800

Meanwhile on Hacker News, users were annoyed, saying that such issues have become quite common. One user commented, "This is becoming so often it's embarrassing really. The way it's handled in the app is also not ideal to say the least - only indication that something is wrong is that the text you are trying to send is greyed out."

Why did Slack suffer an outage on Friday?
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services