
Tech News - Cloud & Networking

376 Articles

Go Cloud is Google's bid to establish Golang as the go-to language of cloud

Richard Gall
25 Jul 2018
2 min read
Google's Go is one of the fastest growing programming languages on the planet, and Google is now bidding to make it the go-to language for cloud development. Go Cloud, a new library offering a set of tools to support cloud development, was revealed in a blog post published yesterday. "With this project," the team explains, "we aim to make Go the language of choice for developers building portable cloud applications."

Why Go Cloud now?

Google developed Go Cloud in response to demand for a way of writing simpler applications that aren't so tightly coupled to a single cloud provider. The team did considerable research into the key challenges and use cases in the Go community to arrive at Go Cloud. They found that the growing demand for multi-cloud and hybrid cloud solutions wasn't being fully leveraged by engineering teams, because there is a trade-off between improving portability and shipping updates. Essentially, the need to decouple applications was being pushed back by the day-to-day pressures of delivering new features. With Go Cloud, developers will be able to solve this problem and build portable cloud solutions that aren't tied to one cloud provider.

What's inside Go Cloud?

Go Cloud is a library that consists of a range of APIs. The team has "identified common services used by cloud applications and have created generic APIs to work across cloud providers." These APIs include:

Blob storage
MySQL database access
Runtime configuration
An HTTP server configured with request logging, tracing, and health checking

At the moment Go Cloud is compatible with Google Cloud Platform and AWS, but the team says it plans "to add support for additional cloud providers very soon."

Try Go Cloud for yourself

If you want to see how Go Cloud works, you can try it out for yourself; this tutorial on GitHub is a good place to start. You can also stay up to date with news about the project by joining Google's dedicated mailing list.
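Go Cloud itself is a Go library, but the provider-neutral pattern its generic APIs follow can be sketched in a few lines. The sketch below is purely illustrative (the names `BlobStore`, `InMemoryBlobStore`, and `save_report` are hypothetical, not part of Go Cloud): application code depends only on a generic interface, so the backing provider can be swapped without touching it.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Generic blob-storage interface the application codes against."""
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def read(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Stand-in for a concrete provider backend (e.g. GCS or S3)."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def write(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def read(self, key: str) -> bytes:
        return self._blobs[key]

def save_report(store: BlobStore, name: str, body: bytes) -> None:
    # Application logic depends only on the generic interface, so the
    # backing provider can be swapped without changing this function.
    store.write("reports/" + name, body)

store = InMemoryBlobStore()
save_report(store, "q3.txt", b"revenue up")
```

This is the trade-off the Go Cloud team describes: once the application is written against the generic API, decoupling from a single provider no longer competes with shipping features.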


Microsoft supercharges its Azure AI platform with new features

Gebin George
14 Jun 2018
2 min read
Microsoft recently announced a few innovations to its AI platform powered by Microsoft Azure. These updates are well aligned to Microsoft's digital transformation strategy of helping organizations augment their machine learning capabilities for better performance.

Cognitive Search

Cognitive Search is a new feature in the Azure portal which leverages the power of AI to understand content and append the information into Azure Search. It supports different file readers for formats such as PDF and Office documents, and enables cognitive capabilities like OCR, key phrase extraction, language detection, image analysis, and even facial recognition. The initial search pulls data from various sources and then applies these cognitive skills to store the data in an optimized index.

Azure ML SDK for Python

In the Azure Machine Learning ecosystem, this additional SDK lets developers and data scientists execute key AML workflows (model training, model deployment, and scoring) directly through a single control-plane API within Python.

Azure ML Packages

Microsoft now offers Azure ML packages, a rich set of pip-installable extensions to Azure ML. This streamlines the process of building efficient ML models by building on the deep learning capabilities of the Azure AI platform.

ML.NET

This cross-platform open source framework is meant for .NET developers and provides enterprise-grade software libraries of the latest innovations in machine learning from platforms that include Bing, Office, and Windows. This service is available on the AI platform for preview.

Project Brainwave

This service is also available on the Azure ML portal for preview. The architecture is built to process deep neural networks, using hardware acceleration to enable fast AI. You can have a look at the Azure AI portal for more details.


Learn Azure serverless computing for free - Download a free eBook from Microsoft

Packt
05 Mar 2018
2 min read
There has been a lot of noise around serverless computing over the last couple of years. There have been arguments that it’s going to put the container revolution to bed, and while that’s highly unlikely (containers and serverless are simply different solutions that are appropriate in different contexts), it’s significant that a trend like serverless could emerge so quickly to capture the attention of engineers and architects. It says a lot about the rapidly changing nature of software infrastructures and the increased demands for agility, scalability, and power. Azure is a cloud solution that’s only going to help drive serverless adoption further. But we know there’s always some trepidation among tech decision makers when choosing to implement something new or use a new platform. That’s why we’re delighted to be partnering with Microsoft Azure to give the world free access to Azure Serverless Computing Cookbook. Packed with more than 50 Azure serverless tutorials and recipes to help solve common and not so common challenges, this 325-page eBook is both a useful introduction to Azure’s serverless capabilities and a useful resource for anyone already acquainted with it. Simply click here to go to Microsoft Azure to download the eBook for free.


Making your new normal safer with reCAPTCHA Enterprise from Cloud Blog

Matthew Emerick
15 Oct 2020
4 min read
Traffic from both humans and bots is at record highs. Since March 2020, reCAPTCHA has seen a 40% increase in usage: businesses and services that previously saw most of their users in person have shifted to online-first or online-only. This increased demand for online services and transactions can expose businesses to various forms of online fraud and abuse, and without dedicated teams familiar with these attacks and how to stop them, we’ve seen hundreds of thousands of new websites come to reCAPTCHA for visibility and protection. During COVID-19, reCAPTCHA has played a critical role in helping global public sector agencies distribute masks and other supplies, provide up-to-date information to constituents, and secure user accounts from distributed attacks. The majority of these agencies use the score-based detection that comes with reCAPTCHA v3 or reCAPTCHA Enterprise instead of showing the visual or audio challenges found in reCAPTCHA v2. This reduces friction for users and also gives teams flexibility in how to take action on bot requests and fraudulent activity. reCAPTCHA Enterprise can also help protect your business. Whether you’re moving operations online for the first time or have your own team of security engineers, reCAPTCHA can help you detect new web attacks, understand the threats, and take action to keep your users safe. Many enterprises lack visibility into parts of their site, and adding reCAPTCHA helps to expose costly attacks before they happen. The console shows the risk associated with each action to help your business stay ahead. Unlike many other abuse- and fraud-fighting platforms, reCAPTCHA doesn’t rely on invasive fingerprinting. These techniques can often penalize privacy-conscious users who try to keep themselves safe with tools such as private networks, and are in conflict with browsers’ push for privacy-by-default.
Instead, we’ve shifted our focus to in-session behavioral risk analysis, detecting fraudulent behavior rather than caring about who or what is behind the network connection. We’ve found this to be extremely effective in detecting attacks in a world where adversaries have control of millions of IP addresses and compromised devices, and regularly pay real humans to manually bypass detections. Since we released reCAPTCHA Enterprise last year, we’ve been able to work closer with existing and new customers, collaborating on abuse problems and determining best practices in specific use cases, such as account takeovers, carding, and scraping. The more granular score distribution that comes with reCAPTCHA Enterprise gives customers more fine-tuned control over when and how to take action. reCAPTCHA Enterprise learns how to score requests specific to the use case, but the score is also best used in a context-specific way. Our most successful customers use features to delay feedback to adversaries, such as limiting capabilities of suspicious accounts, requiring additional verification for sensitive purchases, and manually moderating content likely generated by a bot.  We also recently released a report by ESG where they evaluated the effectiveness of reCAPTCHA Enterprise as deployed in a real-world hyperscale website to protect against automated credential stuffing and account takeover attacks. ESG noted: “Approximately two months after reCAPTCHA Enterprise deployment, login attempts dropped by approximately 90% while the registered user base grew organically.” We’re continually developing new types of signals to detect abuse at scale. Across the four million sites with reCAPTCHA protections enabled, we defend everything from accounts, to e-commerce transactions, to food distribution after disasters, to voting for your favorite celebrity. Now more than ever, we’re proud to be protecting our customers and their users. 
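The score-based flow described above can be sketched as a small dispatch function. reCAPTCHA v3 and Enterprise scores run from 0.0 (likely a bot) to 1.0 (likely a human); the thresholds and action names below are illustrative assumptions, not values Google prescribes. The point the article makes is that low scores should trigger context-specific friction that delays feedback to an adversary, rather than an outright, easily detected block.

```python
def action_for_score(score: float) -> str:
    """Map a reCAPTCHA-style risk score to a response.

    Thresholds and action names here are hypothetical; a real
    deployment tunes them per use case (login, checkout, signup).
    """
    if score >= 0.7:
        return "allow"                             # likely human
    if score >= 0.3:
        return "require_additional_verification"   # soft friction
    return "limit_account_capabilities"            # delayed feedback
```

A checkout flow might route `"require_additional_verification"` to a step-up check for sensitive purchases, while `"limit_account_capabilities"` quietly restricts a suspicious account instead of rejecting it outright.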
To see reCAPTCHA Enterprise in action, check out our latest video. To get started with reCAPTCHA Enterprise, contact our sales team.


Nmap 7.80 releases with a new Npcap Windows packet capture driver and other 80+ improvements!

Vincy Davis
14 Aug 2019
3 min read
On August 10, Gordon Lyon, the creator of Nmap, announced the release of Nmap 7.80 during the recently concluded DEF CON 2019 in Las Vegas. This is a major release of Nmap: it contains 80+ enhancements and is the first stable release in over a year. The major highlight of this release is the newly built Npcap Windows packet capturing library. Npcap uses modern APIs and offers better performance, more features, and improved security.

What's new in Nmap 7.80?

Npcap Windows packet capture driver: Npcap is based on the discontinued WinPcap library, but with improved speed, portability, and efficiency. It implements the libpcap API, giving Windows applications the same portable packet capturing interface that is supported on Linux and Mac OS X. Npcap can optionally be restricted to only allow administrators to sniff packets, thus providing increased security.

11 new NSE scripts added: New scripts from eight authors have been added, taking the total number of NSE scripts to 598. The new scripts focus on HID devices, Jenkins servers, HTTP servers, Logical Units (LU) of TN3270E servers, and more.

pcap_live_open replaced with pcap_create: pcap_create solves packet loss problems on Linux and brings performance improvements on other platforms.

rand.lua library: The new rand.lua library uses the best sources of randomness available on the system to generate random strings.

oops.lua library: This new library helps in easily reporting errors, including plenty of debugging details.

TLS support added: TLS support has been added to rdp-enum-encryption, which enables the regulation of protocol version against servers that require TLS.

New service probe and match lines: New service probe and match lines have been added for adb, the Android Debug Bridge, which may allow remote code execution.

Two new common error strings: Two new common error strings have been added to improve MySQL detection by the script http-sql-injection.
New script-arg http.host: This allows users to force a particular value for the Host header in all HTTP requests.

Users love the new improvements in Nmap 7.80.
https://twitter.com/ExtremePaperC/status/1160388567098515456
https://twitter.com/Jiab77/status/1160555015041363968
https://twitter.com/h4knet/status/1161367177708093442

For the full list of changes in Nmap 7.80, head over to the Nmap announcement.
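The error-string improvement mentioned above is easy to picture: http-sql-injection sends probe requests and flags responses whose bodies contain known database error messages. The real script is written in Lua and ships its own, much larger pattern list; the sketch below is an illustrative Python reconstruction of the idea, not Nmap's code.

```python
import re

# Illustrative-only patterns; the real http-sql-injection NSE script
# (written in Lua) maintains its own, much longer list.
SQL_ERROR_PATTERNS = [
    re.compile(r"You have an error in your SQL syntax", re.IGNORECASE),
    re.compile(r"Warning:.*mysql_", re.IGNORECASE),
]

def looks_sql_injectable(response_body: str) -> bool:
    """Flag an HTTP response whose body contains a known DB error
    string, suggesting injected input reached the database unescaped."""
    return any(p.search(response_body) for p in SQL_ERROR_PATTERNS)
```

Adding two more common error strings, as Nmap 7.80 does, simply widens this pattern list so more MySQL error pages are recognized.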


Google suffers another outage as Google Cloud servers in the us-east1 region are cut off

Amrata Joshi
03 Jul 2019
3 min read
Yesterday, Google Cloud servers in the us-east1 region were cut off from the rest of the world as an issue was reported with Cloud Networking and Load Balancing within us-east1. The disruption was traced to physical damage to multiple concurrent fiber bundles that serve network paths in us-east1. At 10:25 am PT yesterday, the status page noted that "Customers may still observe traffic through Global Load-balancers being directed away from back-ends in us-east1 at this time." It was later posted on the status dashboard that mitigation work was underway to address the issue with Google Cloud Networking and Load Balancing in us-east1. The rate of errors was decreasing by then, but a few users still faced elevated latency. Around 4:05 pm PT, the status was updated: "The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours. In the meantime, we are electively rerouting traffic to ensure that customers' services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period. We will provide another status update either as the situation warrants or by Wednesday, 2019-07-03 12:00 US/Pacific tomorrow." This outage appears to be the second major one to hit Google's services in recent times. Last month, Google Calendar was down for nearly three hours around the world, and Google Cloud suffered a major outage that took down a number of Google services including YouTube, G Suite, Gmail, and others. According to a person who works on Google Cloud, the team is experiencing an issue with a subset of the fiber paths that supply the region and is working towards resolving it.
They have removed almost all Google.com traffic out of the region to prefer GCP customers. A Google employee commented on the HackerNews thread, “I work on Google Cloud (but I'm not in SRE, oncall, etc.). As the updates to [1] say, we're working to resolve a networking issue. The Region isn't (and wasn't) "down", but obviously network latency spiking up for external connectivity is bad. We are currently experiencing an issue with a subset of the fiber paths that supply the region. We're working on getting that restored. In the meantime, we've removed almost all Google.com traffic out of the Region to prefer GCP customers. That's why the latency increase is subsiding, as we're freeing up the fiber paths by shedding our traffic.” Google Cloud users are anxious about the outage and are waiting for services to be restored to normal. https://twitter.com/IanFortier/status/1146079092229529600 https://twitter.com/beckynagel/status/1146133614100221952 https://twitter.com/SeaWolff/status/1146116320926359552 Ritiko, a cloud-based EHR company, is also experiencing issues because of the Google Cloud outage, as they host their services there. https://twitter.com/ritikoL/status/1146121314387857408 As of now there is no further update from Google on whether the outage is resolved, but they expect a full resolution within the next 24 hours. Check this space for updates.

An update on Bcachefs, the “next generation Linux filesystem”

Melisha Dsouza
03 Dec 2018
3 min read
Kent Overstreet announced Bcachefs as “the COW filesystem for Linux that won't eat your data" in 2015. Since then the system has undergone numerous updates and patches to get where it is today. On the 1st of December, Overstreet published an update on the problems and improvements currently being worked on in Bcachefs.

Status update on Bcachefs

Since the last update, Overstreet has focused on two major areas of improvement: atomicity of filesystem operations, and non-persistence of allocation information (per-bucket sector counts). The filesystem operations that had anything to do with i_nlink were not atomic; on startup, the system would have to scan and recalculate i_nlink and also delete no-longer-referenced inodes. And because allocation information was not persisted, on startup the system would also have to recalculate all the disk space accounting. The team has now been able to make everything fully atomic except for fallocate/fcollapse/etc. After an unclean shutdown, the only thing left to do is scan the inodes btree for inodes that have been deleted. Erasure coding is about 80% done now in Bcachefs. Overstreet is now focused on persistent allocation information, which will then allow him to work on reflink, which in turn will be useful to the company that's funding bcachefs development. This is because the reflinked extent refcounts will be much too big to keep in memory and will therefore have to be kept in a btree and updated whenever doing extent updates. The infrastructure needed to make that happen also depends on making disk space accounting persistent. After all of these updates, he claims bcachefs will have fast mounts (including after unclean shutdown). He is also working on some improvements to disk space accounting for multi-device filesystems, which will lead up to fast mounts after clean shutdowns.
To know whether a user can safely mount in degraded mode, the filesystem will have to store a list of all the combinations of disks that have data replicated across them (or that are in an erasure-coded stripe), since there is no fixed layout of the kind regular RAID uses.

Why should you choose Bcachefs?

Overstreet states that Bcachefs is stable, fast, and has a small and clean code base, along with the necessary features to be a modern Linux filesystem. It has a long list of features, completed or in progress:

Copy on write (COW), like zfs or btrfs
Full data and metadata checksumming
Caching
Compression
Encryption
Snapshots
Scalability

Bcachefs prioritizes robustness and reliability. According to Kent, Bcachefs ensures that users won't lose their data. Bcachefs is an extension of bcache, which was designed as a caching layer to improve block I/O performance. It uses a solid-state drive as a cache for a (slower, larger) underlying storage device. Mainline bcache is not a typical filesystem but looks like a special kind of block device. It handles the movement of blocks of data between fast and slow storage, ensuring that the most frequently used data is kept on the faster device. bcache manages data in a way that yields high performance while ensuring that no data is ever lost, even when an unclean shutdown takes place. You can head over to LKML.org for more information on this announcement.
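The copy-on-write idea behind the "won't eat your data" claim is simple to sketch: an update never overwrites the live version in place, so a crash mid-update leaves either the complete old state or the complete new state. The snippet below is a toy model of that idea only (hypothetical names, in-memory lists standing in for disk blocks), not bcachefs's actual on-disk format or code.

```python
def cow_update(blocks, index, new_data):
    """Copy-on-write update: never modify the old version in place.
    Build the new version alongside it, then atomically adopt the new
    reference, so a crash mid-update leaves either the old or the new
    state fully intact. (A toy model, not bcachefs code.)"""
    updated = list(blocks)      # shallow copy: old version untouched
    updated[index] = new_data   # only the changed block differs
    return updated              # caller atomically swaps to this ref

old = ["a", "b", "c"]
new = cow_update(old, 1, "B")   # old still reads ["a", "b", "c"]
```

Filesystems like zfs, btrfs, and bcachefs apply the same principle at the extent and btree-node level, which is why an unclean shutdown needs only limited recovery work rather than a full consistency scan.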


VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service

Savia Lobo
27 Jun 2018
2 min read
VMware recently announced its Kubernetes-as-a-Service offering with the launch of VMware Kubernetes Engine (VKE), which provides a multi-cloud experience. VKE is a fully managed service offered through a SaaS model. It allows customers to use Kubernetes easily without having to worry about the deployment and operation of Kubernetes clusters. Kubernetes lets users manage clusters of containers while also making it easier to move applications between public hosted clouds. By adding Kubernetes on cloud, VMware offers a managed service that will use Kubernetes containers with reduced complexity. VMware's Kubernetes engine will face stiff competition from Google Cloud and Microsoft Azure, among others. Recently, Rackspace also announced its partnership with HPE to develop a new Kubernetes-based cloud offering.

VMware Kubernetes Engine (VKE) features include:

VMware Smart Cluster
VMware Smart Cluster is the selection of compute resources to constantly optimize resource usage, provide high availability, and reduce cost. It enables the management of cost-effective, scalable Kubernetes clusters optimized to application requirements. Users also get role-based access, with visibility restricted to their predefined environment.

Fully managed by VMware
VMware Kubernetes Engine (VKE) is fully managed by VMware. It ensures that clusters always run in an efficient manner with multi-tenancy, seamless Kubernetes upgrades, high availability, and security.

Security by default in VKE
VMware Kubernetes Engine is highly secure, with features like:
Multi-tenancy
Deep policy control
Dedicated AWS accounts per organization
Logical network isolation
Integrated identity and access management with single sign-on

Global availability
VKE has a region-agnostic user interface and is available across three AWS regions (US-East1, US-West2, and EU-West1), giving users the choice of which region to run clusters in.
Read full coverage about the VMware Kubernetes Engine (VKE) on the official website.


Pivotal open sources kpack, a Kubernetes-native image build service

Sugandha Lahoti
23 Aug 2019
2 min read
In April, Pivotal and Heroku teamed up to create Cloud Native Buildpacks for Kubernetes. Cloud Native Buildpacks turn source code into production-ready, OCI-compatible Docker images, and are based on the popular buildpack model. Yesterday, Pivotal open-sourced kpack, a set of experimental build-service Kubernetes resource controllers. Essentially, kpack is a Kubernetes-native way to build and update containers: it automates the creation and update of container images that can be run anywhere. Pivotal's commercial implementation of kpack comes via Pivotal Build Service, which users can run atop Kubernetes to boost developer productivity. The Build Service integrates kpack with buildpacks and the Kubernetes permissions model. kpack presents a CRD as its interface, so users can interact with it using all Kubernetes API tooling, including kubectl. Pivotal has open-sourced kpack for two reasons, as mentioned in its blog post: “First, to provide Build Service’s container building functionality and declarative logic as a consumable component that can be used by the community in other great products. Second, to provide a first-class interface, to create and modify image resources for those who desire more granular control.” Several companies and communities have announced that they will be using kpack in their projects. Project riff will use kpack to build functions to handle events, and the Cloud Foundry community plans to feature kpack as the new app staging mechanism in the Cloud Foundry Application Runtime. Check out the kpack repo for more details. You can also request alpha access to Build Service. In other news, Pivotal and VMware (both part of Dell Technologies) are negotiating a deal for VMware to acquire Pivotal, as per a recent regulatory filing from Dell. VMware, Pivotal, and Dell have jointly filed the document informing government regulators about the potential transaction.


Netflix releases FlameScope

Richard Gall
06 Apr 2018
2 min read
Netflix has released FlameScope, a visualization tool that allows software engineering teams to monitor performance issues. From application startup to single-threaded execution, FlameScope provides real-time insight into the time-based metrics crucial to software performance. The team at Netflix has made FlameScope open source, encouraging engineers to contribute to the project and help develop it further; many development teams could derive a lot of value from the tool, and we're likely to see many customisations as its community grows.

How does FlameScope work?

Watch the video below to learn more about FlameScope. https://youtu.be/cFuI8SAAvJg Essentially, FlameScope allows you to build something a bit like a flame graph, but with an extra dimension. One of the challenges Netflix identified is that while flame graphs allow you to analyze steady and consistent workloads, "often there are small perturbations or variation during that minute that you want to know about, which become a needle-in-a-haystack search when shown with the full profile". With FlameScope, you get the flame graph, but by using a subsecond-offset heat map, you're also able to see the "small perturbations" you might otherwise have missed. As Netflix explains: "You can select an arbitrary continuous time-slice of the captured profile, and visualize it as a flame graph."

Why Netflix built FlameScope

FlameScope was built by the Netflix cloud engineering team. The key motivations for building it are actually pretty interesting. The team had a microservice that was suffering from strange spikes in latency, the cause a mystery. One member of the team found that these spikes, which occurred around every fifteen minutes, appeared to correlate with "an increase in CPU utilization that lasted only a few seconds." CPU flame graphs, of course, didn't help for the reasons outlined above.
To tackle this, the team effectively sliced a flame graph into smaller chunks. Slicing it down into one-second snapshots was, as you might expect, a pretty arduous task, so by using subsecond heat maps the team was able to create flame graphs on a really small scale. This made it much easier to visualize those variations. The team is planning to continue developing the FlameScope project. It will be interesting to see where they decide to take it and how the community responds. To learn more read the post on the Netflix Tech Blog.
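The subsecond-offset heat map described above amounts to bucketing profiler samples into a two-dimensional grid: whole seconds along one axis, subsecond offsets along the other. The function below is a simplified, hypothetical sketch of that idea, not FlameScope's actual implementation.

```python
from collections import Counter

def subsecond_heatmap(sample_times, buckets_per_second=50):
    """Bucket profiler sample timestamps (seconds, as floats) into a
    (whole-second, subsecond-offset) grid. A hot cell reveals a brief
    CPU spike that a whole-minute flame graph would average away; the
    user then selects that slice to render as a flame graph.
    (Simplified sketch of the idea, not FlameScope's code.)"""
    grid = Counter()
    for t in sample_times:
        second = int(t)
        offset = int((t - second) * buckets_per_second)
        grid[(second, offset)] += 1
    return grid

# Three samples land in the same 20 ms cell; one lands elsewhere.
grid = subsecond_heatmap([1.001, 1.002, 1.003, 2.5])
```

This is why the approach beats hand-slicing one-second snapshots: the grid localizes a spike to a subsecond cell first, and only that narrow time range needs to be rendered as a flame graph.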

Red Hat Summit 2019 Highlights: Microsoft collaboration, Red Hat Enterprise Linux 8 (RHEL 8), IDC study predicts support for software worth $10 trillion

Savia Lobo
09 May 2019
11 min read
The three-day Red Hat Summit 2019 kicked off yesterday at the Boston Convention and Exhibition Center, United States. There have been a lot of exciting announcements so far, including a collaboration between Red Hat and Microsoft, with Satya Nadella (Microsoft's CEO) coming over to announce it, the release of Red Hat Enterprise Linux 8, an IDC study predicting that software running on Red Hat Enterprise Linux will touch $10 trillion in global business revenue this year, and much more. Let us have a look at each of these announcements in brief.

Azure Red Hat OpenShift: a Red Hat and Microsoft collaboration

The latest Red Hat and Microsoft collaboration, Azure Red Hat OpenShift, must be important: Microsoft's CEO himself came across from Seattle to present it. Red Hat had already brought OpenShift to Azure last year; this is the next step in its partnership with Microsoft. Azure Red Hat OpenShift combines Red Hat's enterprise Kubernetes platform OpenShift (running on Red Hat Enterprise Linux, RHEL) with Microsoft's Azure cloud. With Azure Red Hat OpenShift, customers can combine Kubernetes-managed, containerized applications into Azure workflows. In particular, the two companies see this pairing as a road forward for hybrid-cloud computing. Paul Cormier, President of Products and Technologies at Red Hat, said, “Azure Red Hat OpenShift provides a consistent Kubernetes foundation for enterprises to realize the benefits of this hybrid cloud model. This enables IT leaders to innovate with a platform that offers a common fabric for both app developers and operations.” Some features of Azure Red Hat OpenShift include: Fully managed clusters with master, infrastructure and application nodes managed by Microsoft and Red Hat; plus, no VMs to operate and no patching required. Regulatory compliance will be provided through compliance certifications similar to other Azure services.
Enhanced flexibility to move applications more freely from on-premises environments to the Azure public cloud, via the consistent foundation of OpenShift.
Greater speed in connecting to Azure services from on-premises OpenShift deployments.
Extended productivity through easier access to Azure public cloud services such as Azure Cosmos DB, Azure Machine Learning and Azure SQL DB for building the next generation of cloud-native enterprise applications.

According to the official press release, “Microsoft and Red Hat are also collaborating to bring customers containerized solutions with Red Hat Enterprise Linux 8 on Azure, Red Hat Ansible Engine 2.8 and Ansible Certified modules. In addition, the two companies are working to deliver SQL Server 2019 with Red Hat Enterprise Linux 8 support and performance enhancements.”

Red Hat Enterprise Linux 8 (RHEL 8) is now generally available

Red Hat Enterprise Linux 8 (RHEL 8) provides a consistent OS across public, private, and hybrid cloud environments. It also gives users version choice, long life-cycle commitments, a robust ecosystem of certified hardware, software, and cloud partners, and now comes with built-in management and predictive analytics.

Features of RHEL 8

Support for the latest and emerging technologies

RHEL 8 is supported across different architectures and environments, giving users a consistent and stable OS experience that helps them adapt to emerging tech trends such as machine learning, predictive analytics, Internet of Things (IoT), edge computing, and big data workloads. This is enabled largely by hardware innovations such as GPUs, which can accelerate machine learning workloads. RHEL 8 is supported for deploying and managing GPU-accelerated NGC containers on Red Hat OpenShift. These AI containers deliver integrated software stacks and drivers to run GPU-optimized machine learning frameworks such as TensorFlow, Caffe2, PyTorch, MXNet, and others.
Also, NVIDIA’s DGX-1 and DGX-2 servers are RHEL-certified and are designed to deliver powerful solutions for complex AI challenges.

Introduction of Application Streams

RHEL 8 introduces Application Streams, through which fast-moving languages, frameworks and developer tools are updated frequently without impacting the core resources. This melds faster developer innovation with production stability in a single, enterprise-class operating system.

Abstracts complexities of granular sysadmin tasks with the RHEL web console

RHEL 8 abstracts away many of the deep complexities of granular sysadmin tasks behind the Red Hat Enterprise Linux web console. The console provides an intuitive, consistent graphical interface for managing and monitoring the Red Hat Enterprise Linux system, from the health of virtual machines to overall system performance. To further improve ease of use, RHEL supports in-place upgrades, providing a more streamlined, efficient and timely path for users to convert Red Hat Enterprise Linux 7 instances to Red Hat Enterprise Linux 8 systems.

Red Hat Enterprise Linux System Roles for managing and configuring Linux in production

Red Hat Enterprise Linux System Roles automate many of the more complex tasks around managing and configuring Linux in production. Powered by Red Hat Ansible Automation, System Roles are pre-configured Ansible modules that enable ready-made automated workflows for handling common, complex sysadmin tasks. This automation makes it easier for new systems administrators to adopt Linux protocols and helps to eliminate human error as the cause of common configuration issues.

Supports the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards

To enhance security, RHEL 8 supports the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards. This provides access to the strongest, latest standards in cryptographic protection, which can be implemented system-wide via a single command, limiting the need for application-specific policies and tuning.
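To give a concrete flavor of two of the features above (a sketch, assuming a subscribed RHEL 8 host; the nodejs stream is only an example), Application Streams are driven through yum/dnf modules, and the system-wide cryptographic policy is switched with the `update-crypto-policies` tool:

```shell
# Application Streams: multiple versions of a runtime without touching
# the core OS, managed as yum/dnf modules
yum module list nodejs            # list the streams available for nodejs
yum module install nodejs:10      # enable and install a specific stream

# System-wide crypto policy (covers OpenSSL 1.1.1 / TLS 1.3 settings)
update-crypto-policies --show     # print the active policy, e.g. DEFAULT
update-crypto-policies --set FUTURE   # switch to a more conservative policy
```

Both commands require root on a RHEL 8 system; a policy change takes effect for newly started applications.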
Support for the Red Hat container toolkit

With cloud-native applications and services frequently driving digital transformation, RHEL 8 delivers full support for the Red Hat container toolkit. Based on open standards, the toolkit provides technologies for creating, running and sharing containerized applications. It helps to streamline container development and eliminates the need for bulky, less secure container daemons.

Other additions in RHEL 8 include:

It drives added value for specific hardware configurations and workloads, including the Arm and POWER architectures as well as real-time applications and SAP solutions.
It forms the foundation for Red Hat’s entire hybrid cloud portfolio, starting with Red Hat OpenShift 4 and the upcoming Red Hat OpenStack Platform 15.
Red Hat Enterprise Linux CoreOS, a minimal-footprint operating system designed to host Red Hat OpenShift deployments, is built on RHEL 8 and will be released soon.
Red Hat Enterprise Linux 8 is also broadly supported as a guest operating system on Red Hat hybrid cloud infrastructure, including Red Hat OpenShift 4, Red Hat OpenStack Platform 15 and Red Hat Virtualization 4.3.

Red Hat Universal Base Image becomes generally available

Red Hat Universal Base Image, a userspace image derived from Red Hat Enterprise Linux for building Red Hat-certified Linux containers, is now generally available. The Red Hat Universal Base Image is available to all developers, with or without a Red Hat Enterprise Linux subscription, providing a more secure and reliable foundation for building enterprise-ready containerized applications. Applications built with the Universal Base Image can be run anywhere, and gain the benefits of the Red Hat Enterprise Linux life cycle and support from Red Hat when run on Red Hat Enterprise Linux or Red Hat OpenShift Container Platform.
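As a minimal sketch of the daemonless container toolkit together with the Universal Base Image (assuming podman is installed; the image path follows Red Hat's public registry):

```shell
# podman is CLI-compatible with docker but runs containers without a
# root daemon; here it pulls and runs the freely available UBI 8 image
podman pull registry.access.redhat.com/ubi8/ubi
podman run --rm registry.access.redhat.com/ubi8/ubi cat /etc/redhat-release
```

The same image can serve as the `FROM` line of a Containerfile, which is what makes UBI-based containers redistributable without a RHEL subscription.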
Red Hat reveals results of a commissioned IDC study

Yesterday, at its summit, Red Hat also announced new research from IDC that examines the contributions of Red Hat Enterprise Linux to the global economy. The press release says: “According to the IDC study, commissioned by Red Hat, software and applications running on Red Hat Enterprise Linux are expected to contribute to more than $10 trillion worth of global business revenues in 2019, powering roughly 5% of the worldwide economy as a cross-industry technology foundation.”

According to IDC’s research, IT organizations using Red Hat Enterprise Linux can expect to save $6.7 billion in 2019 alone. IT executives responding to the global survey point to Red Hat Enterprise Linux:

Reducing the annual cost of software by 52%
Reducing the amount of time IT staff spend doing standard IT tasks by 25%
Reducing costs associated with unplanned downtime by 5%

To know more about this announcement in detail, read the official press release.

Red Hat infrastructure migration solution helped CorpFlex reduce IT infrastructure complexity and cut costs by 87%

Using Red Hat’s infrastructure migration solution, CorpFlex, a managed service provider in Brazil, successfully reduced the complexity of its IT infrastructure and lowered costs by 87% through its savings on licensing fees. Delivered as a set of integrated technologies and services, including Red Hat Virtualization and Red Hat CloudForms, the Red Hat infrastructure migration solution has provided CorpFlex with the framework for a complete hybrid cloud solution while helping the business improve its bottom line. Red Hat Virtualization is built on Red Hat Enterprise Linux and the Kernel-based Virtual Machine (KVM) project. This gave CorpFlex the ability to virtualize resources, processes, and applications with a stable foundation for a cloud-native and containerized future.
Migrating to Red Hat Virtualization can provide more affordable virtualization, improved performance, enhanced collaboration, extended automation, and more. Diogo Santos, CTO of CorpFlex, said, “With Red Hat Virtualization, we’ve not only seen cost-saving in terms of licensing per virtual machine but we’ve also been able to enhance our own team’s performance through Red Hat’s extensive expertise and training.” To know more about this news in detail, head over to the official press release.

Deutsche Bank increases operational efficiency by using Red Hat’s open hybrid cloud technologies to power its ‘Fabric’ application platform

Fabric is a key component of Deutsche Bank's digital transformation strategy and serves as an automated, self-service hybrid cloud platform that enables application teams to develop, deliver and scale applications faster and more efficiently. Red Hat Enterprise Linux has served as a core operating platform for the bank for a number of years, supporting a common foundation for workloads both on-premises and in the bank's public cloud environment. For Fabric, the bank continues to use Red Hat’s cloud-native stack, built on the backbone of the world’s leading enterprise Linux platform, with Red Hat OpenShift Container Platform. The bank deployed Red Hat OpenShift Container Platform on Microsoft Azure as part of a security-focused new hybrid cloud platform for building, hosting and managing banking applications. By deploying Red Hat OpenShift Container Platform, the industry's most comprehensive enterprise Kubernetes platform, on Microsoft Azure, IT teams can take advantage of massive-scale cloud resources with a standard and consistent PaaS-first model.
According to the press release, “The bank has achieved its objective of increasing its operational efficiency, now running over 40% of its workloads on 5% of its total infrastructure, and has reduced the time it takes to move ideas from proof-of-concept to production from months to weeks.” To know more about this news in detail, head over to the official press release.

Lockheed Martin and Red Hat team up to accelerate upgrades to the U.S. Air Force’s F-22 Raptor fighter jets

Red Hat announced that Lockheed Martin is working with it to modernize the application development process used to bring new capabilities to the U.S. Air Force’s fleet of F-22 Raptor fighter jets. Lockheed Martin Aeronautics replaced the waterfall development process previously used for the F-22 Raptor with an agile methodology and DevSecOps practices that are more adaptive to the needs of the U.S. Air Force. Together, Lockheed Martin and Red Hat created an open architecture based on Red Hat OpenShift Container Platform that has enabled the F-22 team to accelerate application development and delivery.

“The Lockheed Martin F-22 Raptor is one of the world’s premier fighter jets, with a combination of stealth, speed, agility, and situational awareness. Lockheed Martin is working with the U.S. Air Force on innovative, agile new ways to deliver the Raptor’s critical capabilities to warfighters faster and more affordably,” the press release mentions.

Lockheed Martin chose Red Hat Open Innovation Labs to lead it through the agile transformation process, help it implement an open-source architecture onboard the F-22, and simultaneously disentangle its web of embedded systems to create something more agile and adaptive to the needs of the U.S. Air Force.
Red Hat Open Innovation Labs’ dual-track approach to digital transformation combined enterprise IT infrastructure modernization with hands-on instruction that helped Lockheed’s team adopt agile development methodologies and DevSecOps practices. During the Open Innovation Labs engagement, a cross-functional team of five developers, two operators, and a product owner worked together to develop a new application for the F-22 on OpenShift. After seeing an early impact from the initial project team, Lockheed Martin scaled its OpenShift deployment and its use of agile methodologies and DevSecOps practices to a 100-person F-22 development team within six months. During a recent enablement session, the F-22 Raptor scrum team improved its ability to forecast future sprints by 40%, the press release states. To know more about this news in detail, head over to the official press release on Red Hat.

We will keep updating this post with further announcements until the Summit wraps up. Visit the official Red Hat Summit website to know more about the sessions conducted, the list of keynote speakers, and more.

Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend

Introducing numpywren, a system for linear algebra built on a serverless architecture

Sugandha Lahoti
29 Oct 2018
3 min read
Last week, researchers from UC Berkeley and UW Madison published a research paper presenting a system for linear algebra built on a serverless framework. numpywren is a scientific computing framework built on top of the serverless execution framework pywren. Pywren is a stateless computation framework that leverages AWS Lambda to execute Python functions remotely in parallel.

What is numpywren?

numpywren is a distributed system for executing large-scale dense linear algebra programs via stateless function executions. numpywren runs computations as stateless functions while storing intermediate state in a distributed object store. Instead of dealing with individual machines, hostnames, and processor grids, numpywren works on the abstractions of "cores" and "memory". numpywren currently uses the Amazon EC2 and Lambda services for computation and uses Amazon S3 as a distributed memory abstraction. numpywren can scale to run Cholesky decomposition (a linear algebra algorithm) on a 1M x 1M matrix within 36% of the completion time of ScaLAPACK running on dedicated instances, and can be tuned to use 33% fewer CPU-hours. The researchers also introduced LAmbdaPACK, a domain-specific language designed to implement highly parallel linear algebra algorithms in a serverless setting.

Why serverless for numpywren?

Per their research, the serverless computing model can be used for computationally intensive programs while providing ease of use and seamless fault tolerance. The elasticity provided by serverless computing also allows the numpywren system to dynamically adapt to the inherent parallelism of common linear algebra algorithms.

What’s next for numpywren?

One of the main drawbacks of the serverless model is the high communication overhead caused by the lack of locality and efficient broadcast primitives. The researchers want to incorporate coarser serverless executions (e.g., 8 cores instead of 1) that process larger portions of the input data.
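The stateless-execution model numpywren builds on — pure tasks that read their inputs from an object store, compute one block, and write the result back — can be sketched in a few lines of plain Python. The in-memory dict below stands in for Amazon S3, the serial loop stands in for parallel Lambda invocations, and all function names are illustrative, not numpywren's actual API:

```python
# Sketch: blocked matrix multiply as stateless tasks over an object store.
import numpy as np

object_store = {}  # stands in for S3

def put(key, value):
    object_store[key] = value

def get(key):
    return object_store[key]

def block_matmul_worker(i, j, n_blocks):
    """Stateless task: compute block (i, j) of C = A @ B from stored blocks.

    No state lives on the worker; everything flows through the store."""
    acc = sum(get(("A", i, k)) @ get(("B", k, j)) for k in range(n_blocks))
    put(("C", i, j), acc)

def serverless_matmul(A, B, block=2):
    n = A.shape[0] // block
    # Stage the input blocks into the "object store".
    for i in range(n):
        for j in range(n):
            put(("A", i, j), A[i*block:(i+1)*block, j*block:(j+1)*block])
            put(("B", i, j), B[i*block:(i+1)*block, j*block:(j+1)*block])
    # numpywren would dispatch these as parallel Lambda invocations;
    # here we run them serially for clarity.
    for i in range(n):
        for j in range(n):
            block_matmul_worker(i, j, n)
    return np.block([[get(("C", i, j)) for j in range(n)] for i in range(n)])

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4)
C = serverless_matmul(A, B)
```

Because each task is a pure function of the store's contents, a failed invocation can simply be retried — the seamless fault tolerance the researchers describe.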
They also want to develop services that provide efficient collective communication primitives, like broadcast, to help address this problem. The researchers want modern convex optimization solvers such as CVXOPT to use numpywren to scale to much larger problems. They are also working on automatically translating NumPy code directly into LAmbdaPACK instructions that can be executed in parallel. As data centers continue their push towards disaggregation, the researchers point out that platforms like numpywren open up a fruitful area of research. For further explanation, go through the research paper.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
Azure Functions 2.0 launches with better workload support for serverless
How Serverless computing is making AI development easier

Introducing Platform9 Managed Kubernetes Service

Amrata Joshi
04 Feb 2019
3 min read
Today, the team at Platform9, a company known for its SaaS-managed hybrid cloud, introduced a fully managed, enterprise-grade Kubernetes service that works on VMware with a full SLA guarantee. It enables enterprises to deploy and run Kubernetes easily, without management overhead or advanced Kubernetes expertise. It features enterprise-grade capabilities including multi-cluster operations, zero-touch upgrades, high availability, monitoring, and more, which are handled automatically and backed by an SLA. Platform9 Managed Kubernetes (PMK) is part of Platform9’s hybrid cloud solution, which helps organizations centrally manage VMs, containers and serverless functions in any environment. Enterprises can support Kubernetes at scale alongside their traditional VMs, legacy applications, and serverless functions.

Features of Platform9 Managed Kubernetes

Self-service, cloud experience

IT operations teams and VMware administrators can now offer developers a simple, self-service provisioning and automated management experience. It is now possible to deploy multiple Kubernetes clusters with the click of a button, operated under the strictest SLAs.

Run Kubernetes anywhere

PMK allows organizations to run Kubernetes instantly, anywhere. It also delivers centralized visibility and management across all Kubernetes environments: on-premises, public cloud, or at the edge. This helps organizations rein in shadow IT and VM/container sprawl, ensure compliance, improve utilization, and reduce costs across all infrastructure.

Speed

Platform9 Managed Kubernetes lets enterprises get up and running in less than an hour on VMware and eliminates the operational complexity of Kubernetes at scale. PMK helps enterprises modernize their VMware environments without any hardware or configuration changes.
Open ecosystem

Enterprises can benefit from the open source community and all the Kubernetes-related services and applications, as PMK delivers open source Kubernetes on VMware without code forks and ensures portability across environments.

Sirish Raghuram, co-founder and CEO of Platform9, said, “Kubernetes is the #1 enabler for cloud-native applications and is critical to the competitive advantage for software-driven organizations today. VMware was never designed to run containerized workloads, and integrated offerings in the market today are extremely clunky, hard to implement and even harder to manage. We’re proud to take the pain out of Kubernetes on VMware, delivering a pure open source-based, Kubernetes-as-a-Service solution that is fully managed, just works out of the box, and with an SLA guarantee in your own environment.”

To learn more about delivering Kubernetes on VMware, check out the demo video.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
CNCF releases 9 security best practices for Kubernetes, to protect a customer’s infrastructure
GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more
Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless

Richard Gall
18 Sep 2018
3 min read
The API is the building block of much modern software. With Kong 1.0, launching today at Kong Summit, Kong believes it has cemented its position as the go-to platform for developing APIs on modern infrastructures like cloud-native, microservices, and serverless. The release of the first stable version of Kong marks an important milestone for the company as it looks to develop what it calls a 'service control platform.' This is essentially a tool that will allow developers, DevOps engineers, and architects to manage their infrastructure at every point, however they choose to build it. It should, in theory, offer a fully integrated solution that lets you handle APIs, manage security permissions, and even leverage the latest in cutting-edge artificial intelligence for analytics and automation. CEO Augusto Marietti said that "API management is rapidly evolving with the industry, and technology must evolve with it. We built Kong from the ground up to meet these needs -- Kong is the only API platform designed to manage and broker the demands that in-flight data increasingly place on modern software architectures."

How widely used is Kong?

According to the press release, Kong has been downloaded 45 million times, making it the most widely used open source API platform. The team stresses that reaching Kong 1.0 has taken three years of intensive development work, done alongside customers from a wide range of organizations, including Yahoo! Japan and Healthcare.gov. Kanaderu Fukuda, senior manager of the Computing Platform Department at Yahoo! Japan, said: "As Yahoo! Japan shifts to microservices, we needed more than just an API gateway – we needed a high-performance platform to manage all APIs across a modern architecture... With Kong as a single point for proxying and routing traffic across all of our API endpoints, we eliminated redundant code writing for authentication and authorization, saving hundreds of hours.
Kong positions us well to take advantage of future innovations, and we're excited to expand our use of Kong for service mesh deployments next."

New features in Kong 1.0

Kong 1.0, according to the release materials, "combines sub-millisecond low latency, linear scalability and unparalleled flexibility." Put simply, it's fast but also easy to adapt and manipulate according to your needs: everything a DevOps engineer or solutions architect would want. Although it isn't mentioned specifically, Kong is a tool that exemplifies the work of SREs (site reliability engineers). It's designed to manage the relationships between various services, and to ensure they not only interact with each other the way they should, but do so with minimum downtime. The Kong team appears to have a huge amount of confidence in the launch of the platform; the extent to which they can grow their customer base depends a lot on how the marketplace evolves, and how much the demand for forward-thinking software architecture grows over the next couple of years.

Read next: How Gremlin is making chaos engineering accessible [Interview]
Is the ‘commons clause’ a threat to open source?
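To give a flavor of the gateway role described in the article, here is a minimal sketch of driving Kong's Admin API from the shell. It assumes a running Kong node with the default Admin (8001) and proxy (8000) ports; the service name, upstream URL, and path are illustrative:

```shell
# Register an upstream service with the gateway...
curl -i -X POST http://localhost:8001/services \
  --data name=orders-api \
  --data url=http://orders.internal:8080

# ...and attach a route so the gateway knows which traffic to proxy to it.
curl -i -X POST http://localhost:8001/services/orders-api/routes \
  --data "paths[]=/orders"

# Requests to the proxy port are now routed to the upstream service.
curl -i http://localhost:8000/orders
```

Plugins (authentication, rate limiting, logging) are attached to services or routes through the same Admin API, which is what lets teams centralize cross-cutting concerns instead of re-implementing them per service.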

Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix

Melisha Dsouza
19 Nov 2018
3 min read
On the 4th of November, Linux 4.20-rc1 was released with a host of notable changes: AMD Vega 20 support getting squared away, AMD Picasso APU support, Intel 2.5G Ethernet support, the removal of Speck, and other new hardware support additions and software features. The release that was supposed to improve the kernel’s performance did not succeed in doing so. On the contrary, the kernel is much slower compared to previous stable Linux kernel releases.

In a blog post published by Phoronix, Michael Larabel, lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org, discussed the results of some tests conducted on the kernel. He bisected the 4.20 kernel merge window to explore the reasons for the significant slowdowns in many real-world workloads. The article attributes the performance degradation to mitigations for the Spectre flaws in the processor. To mitigate against the Spectre flaw, an intentional kernel change was made, termed "STIBP", for cross-hyperthread Spectre mitigation on Intel processors. Single Thread Indirect Branch Predictors (STIBP) prevent cross-hyperthread control of decisions that are made by indirect branch predictors. The STIBP addition in Linux 4.20 will affect systems that have up-to-date/available microcode with this support and whose CPU has Hyper-Threading enabled/present.

Performance issues in Linux 4.20

Michael has done a detailed analysis of the kernel performance; here are some of his findings. Many synthetic and real-world tests showed that Intel Core i9 performance was not up to the mark: the Rodinia scientific OpenMP tests took 30% longer, Java-based DaCapo tests took up to ~50% more time to complete, and the code compilation tests also extended in length. There was lower PostgreSQL database server performance and longer Blender3D rendering times.
All this was noticed on Core i9 7960X and Core i9 7980XE test systems, while AMD Threadripper 2990WX performance was unaffected by the Linux 4.20 upgrade. The latest Linux kernel Git benchmarks also saw a significant pullback in performance, from the early days of the Linux 4.20 merge window up through the very latest kernel code as of today. The affected systems included a low-end Core i3 7100 as well as Xeon E5 v3 and Core i7 systems. The tests found the Smallpt renderer slowing down significantly, PHP performance taking a major dive, and HMMer also facing a major setback compared to the current Linux 4.19 stable series. What is surprising is that Linux 4.19 also carries mitigations against Spectre, Meltdown, Foreshadow, etc., yet 4.20 shows an additional performance drop on top of all the previously outlined performance hits this year. Throughout the testing, the AMD systems didn’t appear to be impacted. This means that if a user disables the Spectre V2 mitigations for better performance, the system’s security could be compromised. You can head over to Phoronix for a complete analysis of the test outputs and more information on this news.

Soon, RHEL (Red Hat Enterprise Linux) won’t support KDE
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project
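Readers who want to check their own machines can inspect the kernel's reported mitigation state directly; a quick sketch (the sysfs path is the kernel's standard vulnerabilities interface, and output varies by CPU, microcode, and kernel version):

```shell
# Report the active Spectre v2 mitigation (e.g. retpoline, IBPB, STIBP)
cat /sys/devices/system/cpu/vulnerabilities/spectre_v2

# Mitigations can be tuned at boot via kernel parameters; for example,
# spectre_v2=off disables the mitigation entirely -- trading security
# for the performance the benchmarks above measured:
#   GRUB_CMDLINE_LINUX="... spectre_v2=off"
```

Disabling a mitigation this way should be weighed carefully, for exactly the reason the article closes on: the performance comes back at the cost of exposure to the underlying flaw.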