
Tech News - Networking

54 Articles
Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation

Sugandha Lahoti
24 Aug 2018
2 min read
Red Hat has rolled out the Red Hat Enterprise Linux 7.6 beta as part of its goal of becoming a cloud powerhouse. This release focuses on security and compliance, automation, and cloud deployment features.

Linux security improvements

On the security front, improvements include:
- GnuTLS library with Hardware Security Module (HSM) support
- Strengthened OpenSSL for mainframes
- Enhancements to the nftables firewall
- Integration of the extended Berkeley Packet Filter (eBPF) to provide a safer mechanism for monitoring Linux kernel activity

Hybrid cloud deployment-related changes

Red Hat Enterprise Linux 7.6 introduces a variety of cloud deployment improvements. Red Hat's Paul Cormier considers the hybrid cloud to be the default technology choice: "Enterprises want the best answers to meet their specific needs, regardless of whether that's through the public cloud or on bare metal in their own datacenter."

For starters, Red Hat Enterprise Linux 7.6 uses Trusted Platform Module (TPM) 2.0 hardware modules to enable Network Bound Disk Encryption (NBDE). This provides two layers of security for hybrid cloud operations: the network-based mechanism works in the cloud, while the on-premises TPM helps keep information on disks more secure.

Red Hat has also introduced Podman, part of its lightweight container toolkit, which adds enterprise-grade security features to containers. Podman complements Buildah and Skopeo by enabling users to run, build, and share containers from the command line. It can also work with CRI-O, a lightweight Kubernetes container runtime.

Management and automation

The latest beta also adds enhancements to the Red Hat Enterprise Linux Web Console, including:
- Available updates shown on the system summary pages
- Automatic configuration of single sign-on for identity management, simplifying this task for security administrators
- An interface to control firewall services

These are just a select few updates. For more detailed coverage, go through the release notes available on the Red Hat Blog.

Related reading:
- Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
- What RedHat and others announced at KubeCon + CloudNativeCon 2018
- RedHat and others launch Istio 1.0 service mesh for microservices
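The two-layer NBDE design described above can be illustrated with a toy sketch: the disk key is split so that one share lives with a network key server (the role Tang plays in real NBDE deployments) and one share is sealed in the local TPM, and neither share alone reveals the key. This is a conceptual illustration only, not the actual Clevis/Tang protocol; the XOR split and all names are assumptions for demonstration.

```python
import secrets

def split_key(disk_key: bytes):
    """Split a disk key into two shares: one for a network key
    server and one sealed in the local TPM. Neither share alone
    reveals any information about the key."""
    network_share = secrets.token_bytes(len(disk_key))
    tpm_share = bytes(a ^ b for a, b in zip(disk_key, network_share))
    return network_share, tpm_share

def recover_key(network_share: bytes, tpm_share: bytes) -> bytes:
    """Both shares must be present -- the machine is on the trusted
    network AND holds its own TPM -- to unlock the disk."""
    return bytes(a ^ b for a, b in zip(network_share, tpm_share))

key = secrets.token_bytes(32)           # the disk encryption key
net, tpm = split_key(key)
assert recover_key(net, tpm) == key     # only both shares together unlock
```

An attacker who steals the disk and its TPM share still cannot decrypt it off-network, which is the property NBDE aims for.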

Microsoft, Adobe, and SAP share new details about the Open Data Initiative

Natasha Mathur
28 Mar 2019
3 min read
Earlier this week at the Adobe Summit, the world's largest conference focused on Customer Experience Management, Microsoft, Adobe, and SAP announced that they're expanding their Open Data Initiative. The CEOs of Microsoft, Adobe, and SAP launched the Open Data Initiative at the Microsoft Ignite conference in 2018. The core idea behind the Open Data Initiative is to make it easier for customers to move data between the three companies' services.

Now, the three partners are looking to transform customer experiences with the help of real-time insights delivered via the cloud. They have also come out with a common approach and a set of resources to help customers create new connections across previously siloed data.

Read also: Women win all open board director seats in Open Source Initiative 2019 board elections

"From the beginning, the ODI has been focused on enhancing interoperability between the applications and platforms of the three partners through a common data model with data stored in a customer-chosen data lake," reads the Microsoft announcement. This unified data lake offers customers their choice of development tools and applications to build and deploy services.

The companies have also come out with a new approach for publishing, enriching, and ingesting initial data feeds from Adobe Experience Platform into a customer's data lake. The approach will be activated via Adobe Experience Cloud, Microsoft Dynamics 365, Office 365, and SAP C/4HANA. This, in turn, will provide a new level of AI enrichment, helping firms serve their customers better.

Moreover, to further advance the initiative, Adobe, Microsoft, and SAP also shared plans to convene a Partner Advisory Council comprising over a dozen firms, including Accenture, Amadeus, Capgemini, Change Healthcare, and Cognizant.

Microsoft states that these organizations believe there is a significant opportunity in the ODI to help them offer altogether new value to their customers. "We're excited about the initiative Adobe, Microsoft and SAP have taken in this area, and we see a lot of opportunity to contribute to the development of ODI," states Stephan Pretorius, CTO, WPP.

Related reading:
- Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
- Microsoft announces: Microsoft Defender ATP for Mac, a fully automated DNA data storage, and revived office assistant Clippy
- Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
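The "common data model in a customer-chosen data lake" idea above boils down to mapping each vendor's record shape onto one shared schema before the data lands in the lake. Here is a minimal sketch; the field names, mappings, and record shapes are invented for illustration and are not the actual ODI schema.

```python
# Hypothetical vendor records; field names are assumptions for
# illustration, not the real ODI common data model.
adobe_record = {"ECID": "abc-123", "email": "a@example.com", "last_touch": "2019-03-25"}
sap_record = {"customer_id": "abc-123", "mail": "a@example.com", "last_order": "2019-03-20"}

def to_common_model(record, mapping):
    """Rename vendor-specific fields to a shared schema so records
    from different services can be joined in one data lake."""
    return {common: record[vendor] for vendor, common in mapping.items() if vendor in record}

ADOBE_MAP = {"ECID": "customer_id", "email": "email", "last_touch": "last_interaction"}
SAP_MAP = {"customer_id": "customer_id", "mail": "email", "last_order": "last_interaction"}

unified = [to_common_model(adobe_record, ADOBE_MAP),
           to_common_model(sap_record, SAP_MAP)]
# Both records now share one schema and can be joined on customer_id.
```

Once normalized this way, previously siloed records can be correlated with a plain join on the shared key, which is the interoperability the initiative promises.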

Opus 1.3, a popular FOSS audio codec with machine learning and VR support, is now generally available

Amrata Joshi
22 Oct 2018
3 min read
Last week, the team at Opus announced the general availability of Opus Audio Codec version 1.3. Opus 1.3 comes with a new set of features: a recurrent neural network, a more reliable speech/music detector, ambisonics support, more efficient memory use, compatibility with RFC 6716, and more.

Opus is an open, royalty-free audio codec useful for all audio applications, from music streaming and storage to high-quality video conferencing and VoIP. Six years after its standardization by the IETF, Opus is included in all major browsers and mobile operating systems, is used for a wide range of applications, and is the default WebRTC codec.

New features in Opus Audio Codec 1.3

Reliable speech/music detector powered by machine learning

Opus 1.3 brings a new speech/music detector. Because it is based on a recurrent neural network, it is far simpler and more reliable than the detector used in version 1.1, which was based on a simple (non-recurrent) neural network followed by an HMM-based layer to combine the neural network results over time. Opus 1.3 introduces a new recurrent unit, the Gated Recurrent Unit (GRU). The GRU not only learns how to use its input and memory at a given time, but also learns how and when to update its memory. This, in turn, helps it remember information for a longer period of time.

Better mixed-content encoding

Mixed-content encoding, especially at bit rates below 48 kb/s, improves because the new detector boosts the performance of Opus. Developers will also see improved speech encoding at lower bit rates, both for mono and stereo.

Encode 3D audio soundtracks for VR easily

This release ships with ambisonics support. Ambisonics can be used to encode 3D audio soundtracks for VR and 360 videos.

The Opus detector won't take much of your space

The Opus detector has just 4,986 weights (fitting in less than 5 KB) and runs in real-time using about 0.02% of a CPU, instead of thousands of neurons and millions of weights running on a GPU.

Additional updates

Improvements to security/hardening, the Voice Activity Detector (VAD), and speech/music classification using an RNN round out the release. The major bug fixes in this release are CELT PLC and bandwidth detection fixes.

Read more about the release on Mozilla's official website. Also, check out the demo for more details.

Related reading:
- YouTube starts testing AV1 video codec format, launches AV1 Beta Playlist
- Google releases Oboe, a C++ library to build high-performance Android audio apps
- How to perform Audio-Video-Image Scraping with Python
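The gating behavior of the GRU described above can be sketched with the standard GRU update equations, reduced to scalars. The weights here are arbitrary stand-ins (real detectors like Opus's use weight matrices trained on audio features); this is only to show how the update gate z and reset gate r let the unit decide when to overwrite its memory.

```python
import math

def sigmoid(v: float) -> float:
    return 1.0 / (1.0 + math.exp(-v))

def gru_step(x: float, h: float, w: dict) -> float:
    """One scalar GRU step with the standard formulation."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)               # update gate: how much to rewrite memory
    r = sigmoid(w["wr"] * x + w["ur"] * h)               # reset gate: how much old memory to consult
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h))  # candidate new memory
    return (1.0 - z) * h + z * h_cand                    # gated blend of old state and candidate

# Arbitrary example weights, not trained values.
w = {"wz": 0.5, "uz": -0.3, "wr": 0.8, "ur": 0.1, "wh": 1.2, "uh": 0.7}
h = 0.0
for x in [0.2, -0.4, 0.9]:   # a toy input sequence
    h = gru_step(x, h, w)
assert -1.0 < h < 1.0         # state stays tanh-bounded
```

When z stays near 0 the old state passes through almost untouched, which is exactly the "remember information for a longer period of time" property the article mentions.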

Soon, RHEL (Red Hat Enterprise Linux) won’t support KDE

Amrata Joshi
05 Nov 2018
2 min read
Late last week, Red Hat announced that RHEL has deprecated KDE (K Desktop Environment) support. KDE Plasma Workspaces (KDE) is an alternative to the default GNOME desktop environment for RHEL; a major future release of Red Hat Enterprise Linux will no longer support using KDE in its place.

In the '90s, the Red Hat team was firmly against KDE and put a great deal of effort into GNOME, since Qt was under a not-quite-free license at the time.

Steve Almy, principal product manager of Red Hat Enterprise Linux, told the Register, "Based on trends in the Red Hat Enterprise Linux customer base, there is overwhelming interest in desktop technologies such as Gnome and Wayland, while interest in KDE has been waning in our installed base."

Red Hat heavily backs GNOME, which is developed as an independent open-source project and used by many other distros. Although Red Hat is signaling the end of KDE support in RHEL, KDE is very much its own independent project that will continue on its own, with or without support from future RHEL editions.

Almy said, "While Red Hat made the deprecation note in the RHEL 7.6 notes, KDE has quite a few years to go in RHEL's roadmap." The note is simply a warning that certain functionality may be removed or replaced in a future RHEL release with functionality similar to or more advanced than the deprecated one. KDE, along with everything listed in Chapter 51 of the Red Hat Enterprise Linux 7.6 release notes, will continue to be supported for the life of Red Hat Enterprise Linux 7.

Read more about this news on the official website of Red Hat.

Related reading:
- Red Hat released RHEL 7.6
- Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
- Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation

Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available

Savia Lobo
11 May 2018
2 min read
Red Hat recently announced that its latest enterprise distribution, Red Hat Enterprise Linux 7.5 (RHEL 7.5), is now generally available. This release aims at simplifying hybrid computing and is packed with features for server administrators and developers.

New features in RHEL 7.5

- Support for Network Bound Disk Encryption (NBDE) devices, new Red Hat cluster management capabilities, and compliance management features.
- Enhancements to the Cockpit administrator console. Cockpit provides a simplified web interface that helps eliminate complexities around Linux system administration, making it easier for new administrators, or administrators moving over from non-Linux systems, to understand the health and status of their operations.
- Improved compliance controls and security, enhanced usability, and tools to cut down storage costs.
- Better integration with Microsoft Windows infrastructure, both in Microsoft Azure and on-premises. This includes improved management and communication with Windows Server, more secure data transfers with Azure, and performance improvements when used within Active Directory architectures. If you wish to use both RHEL and Windows on your network, RHEL 7.5 serves this purpose.
- Improved software security controls to reduce risk while augmenting IT operations. A significant component of these controls is security automation via the integration of OpenSCAP with Red Hat Ansible Automation. This is aimed at facilitating the development of Ansible Playbooks straight from OpenSCAP scans which, in turn, can be leveraged to execute remediations more consistently and quickly across a hybrid IT environment.
- High availability support for enterprise applications running on Amazon Web Services or Microsoft Azure, with Pacemaker support in public clouds via the Red Hat High Availability Add-On and Red Hat Enterprise Linux for SAP Solutions.

To know more about this release in detail, read Red Hat's official blog.

Related reading:
- Linux Foundation launches the Acumos AI Project to make AI accessible
- How to implement In-Memory OLTP on SQL Server in Linux
- Kali Linux 2 released
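The scan-to-playbook idea behind the OpenSCAP and Ansible integration mentioned above can be sketched as follows. This is a toy illustration only: the rule IDs, remediation mappings, and playbook shape are invented for demonstration and are not the actual OpenSCAP output or the playbooks the integration generates.

```python
# Hypothetical failed compliance rules from a scan report.
failed_rules = [
    {"id": "sshd_disable_root_login", "severity": "high"},
    {"id": "package_telnet_removed", "severity": "medium"},
]

# Invented rule-to-remediation mapping, shaped like Ansible task modules.
REMEDIATIONS = {
    "sshd_disable_root_login": {
        "lineinfile": {"path": "/etc/ssh/sshd_config", "line": "PermitRootLogin no"}},
    "package_telnet_removed": {
        "package": {"name": "telnet", "state": "absent"}},
}

def build_playbook(findings):
    """Emit an Ansible-playbook-shaped structure, one task per failed rule."""
    tasks = [{"name": f"Remediate {f['id']}", **REMEDIATIONS[f["id"]]}
             for f in findings if f["id"] in REMEDIATIONS]
    return [{"hosts": "all", "become": True, "tasks": tasks}]

playbook = build_playbook(failed_rules)
```

The point is the pipeline shape: scan findings go in, a consistently structured remediation playbook comes out, so fixes can be applied the same way across a hybrid environment.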

SpaceX shares new information on Starlink after the successful launch of 60 satellites

Sugandha Lahoti
27 May 2019
3 min read
After the successful launch of Elon Musk's mammoth space mission Starlink last week, the company has unveiled a brand new website with more details on the Starlink commercial satellite internet service.

Starlink sent 60 communications satellites into orbit; they will eventually be part of a single constellation providing high-speed internet to the globe. SpaceX plans to deploy nearly 12,000 satellites in three orbital shells by the mid-2020s, initially placing approximately 1,600 in a 550-kilometer (340 mi) altitude shell.

The new website gives a few glimpses of what Starlink's plan looks like, including a CG representation of how the satellites will work. These satellites will move along their orbits simultaneously, providing internet in a given area. SpaceX has also revealed more intricacies about the satellites.

Flat panel antennas

In each satellite, the signal is transmitted and received by four high-throughput phased-array radio antennas. These antennas have a flat panel design and can transmit in multiple directions and frequencies.

Ion propulsion system and solar array

Each satellite carries a krypton ion propulsion system. These systems enable satellites to raise their orbit, maneuver in space, and deorbit. There is also a single solar array, which simplifies the system. Ion thrusters provide a more fuel-efficient form of propulsion than conventional liquid propellants; krypton is less expensive than xenon but offers lower thrust efficiency.

Star Tracker and autonomous collision avoidance system

Star Tracker is SpaceX's built-in sensor system that tells each satellite its orientation, enabling precise broadband beam placement and tracking. The collision avoidance system uses inputs from the U.S. Department of Defense debris tracking system, reducing human error with a more reliable approach. Through this data it can perform maneuvers to avoid collisions with space debris and other spacecraft.

Per TechCrunch, which interviewed a SpaceX representative, "the debris tracker hooks into the Air Force's Combined Space Operations Center, where trajectories of all known space debris are tracked. These trajectories are checked against those of the satellites, and if a possible collision is detected the course changes are made, well ahead of time."

More information on Starlink (such as the cost of the project, what ground stations look like, etc.) is still unknown. Until then, keep an eye on Starlink's website and this space for new updates.

Related reading:
- SpaceX delays launch of Starlink, its commercial satellite internet service, for the second time to "update satellite software"
- Jeff Bezos unveils space mission: Blue Origin's Lunar lander to colonize the moon
- Elon Musk reveals big plans with Neuralink
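For the 550-kilometer shell mentioned above, Kepler's third law gives the orbital period directly. A quick sketch, using standard values for Earth's gravitational parameter and mean radius:

```python
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's standard gravitational parameter
EARTH_RADIUS = 6_371.0    # km, mean radius

def orbital_period_minutes(altitude_km: float) -> float:
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)."""
    a = EARTH_RADIUS + altitude_km   # semi-major axis of the circular orbit
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60.0

# The initial Starlink shell sits at roughly 550 km altitude.
period = orbital_period_minutes(550)   # about 95-96 minutes per orbit
```

An orbit of roughly an hour and a half is why coverage of a given area requires many satellites moving along their orbits simultaneously, as the article describes.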
Red Hat infrastructure migration solution for proprietary and siloed infrastructure

Savia Lobo
24 Aug 2018
3 min read
Red Hat recently introduced its infrastructure migration solution to help provide an open pathway to digital transformation. The Red Hat infrastructure migration solution provides an enterprise-ready pathway to cloud-native application development via Linux containers, Kubernetes, automation, and other open source technologies. It helps organizations accelerate transformation by more safely migrating and managing workloads onto an open source infrastructure platform, reducing cost and speeding innovation.

Joe Fernandes, Vice President, Cloud Platforms Products at Red Hat, said, "Legacy virtualization infrastructure can serve as a stumbling block too, rather than a catalyst, for IT innovation. From licensing costs to closed vendor ecosystems, these silos can hold organizations back from evolving their operations to better meet customer demand. We're providing a way for enterprises to leapfrog these legacy deployments and move to an open, flexible, enterprise platform, one that is designed for digital transformation and primed for the ecosystem of cloud-native development, Kubernetes, and automation."

The Red Hat program consists of three phases:

- Discovery Session: Red Hat Consulting engages with an organization in a complimentary Discovery Session to better understand the scope of the migration and document it effectively.
- Pilot migrations: An open source platform is deployed and operationalized using Red Hat's hybrid cloud infrastructure and management tooling. Pilot migrations are carried out to demonstrate typical approaches, establish initial migration capability, and define the requirements for a larger-scale migration.
- Migration at scale: IT teams migrate workloads at scale. Red Hat Consulting also helps streamline operations across the virtualization pool and navigate complex migration cases.

After the Discovery Session, recommendations are provided for a more flexible open source virtualization platform based on Red Hat technologies:

- Red Hat Virtualization offers an open software-defined infrastructure and centralized management platform for virtualized Linux and Windows workloads. It is designed to give customers greater efficiency for traditional workloads, along with creating a launchpad for cloud-native and container-based application innovation.
- Red Hat OpenStack Platform is built on the enterprise-grade backbone of Red Hat Enterprise Linux. It helps users build an on-premise cloud architecture that provides resource elasticity, scalability, and increased efficiency.
- Red Hat Hyperconverged Infrastructure is a portfolio of solutions that includes Red Hat Hyperconverged Infrastructure for Virtualization and for Cloud. Customers can use it to integrate compute, network, and storage in a form factor designed for greater operational and cost efficiency.

Using the new migration capabilities based on Red Hat's management technologies, including Red Hat Ansible Automation, new workloads can be delivered in an automated fashion with self-service. They also enable IT to more quickly re-create workloads across hybrid and multi-cloud environments.

Read more about the Red Hat infrastructure migration solution on Red Hat's official blog.

Related reading:
- Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0
- Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
- Installing Red Hat CloudForms on Red Hat OpenStack

Confluent, an Apache Kafka service provider adopts a new license to fight against cloud service providers

Natasha Mathur
26 Dec 2018
4 min read
Software firms limiting their licenses to prevent cloud service providers from exploiting their open source code is a common trend these days. One such firm is Confluent, an Apache Kafka service provider, which announced its new Confluent Community License two weeks back. The new license allows users to download, modify, and redistribute the code without letting them provide the software as a service (SaaS).

"What this means is that, for example, you can use KSQL however you see fit as an ingredient in your own products or services, whether those products are delivered as software or as SaaS, but you cannot create a KSQL-as-a-service offering. We'll still be doing all development out in the open and accepting pull requests and feature suggestions," says Jay Kreps, CEO, Confluent.

The new license, however, has no effect on Apache Kafka, which remains under the Apache 2.0 license, and Confluent will continue to contribute to it.

Kreps pointed out that leading cloud providers such as Amazon, Microsoft, Alibaba, and Google all differ in how they approach open source. Some of these major cloud providers partner with open source companies to offer hosted versions of their software as SaaS. Others take the open source code, implement it in their cloud offering, and then push all of their investment into differentiated proprietary offerings. For instance, Michael Howard, CEO, MariaDB Corp., called Amazon's tactics "the worst behavior" he has seen in the software industry, enabled by a loophole in its licensing. Howard also said that the cloud giant is "strip mining by exploiting the work of a community of developers who work for free," as first reported by SiliconANGLE.

One response, Kreps suggests, would be for open source software firms to focus on building more proprietary software and "pull back" from their open source investments. "But we think the right way to build fundamental infrastructure layers is with open code. As workloads move to the cloud we need a mechanism for preserving that freedom while also enabling a cycle of investment, and this is our motivation for the licensing change," Kreps explains.

The Confluent license change follows MongoDB, which switched to the Server Side Public License (SSPL) this October to prevent major cloud providers from misusing its open source code. MongoDB's decision was sparked by the fact that cloud vendors who are not responsible for developing a piece of software "capture all the value" of it without contributing much back to the community. Another reason was that many cloud providers had started taking MongoDB's open-source code to offer hosted commercial versions of its database without following the open-source rules.

The license change helps create "an incredible opportunity to foster a new wave of great open source server-side software," said Eliot Horowitz, CTO and co-founder, MongoDB, who added that he hopes the change will "protect open source innovation."

MongoDB followed the path of the "Commons Clause" license first adopted by Redis Labs. The Commons Clause started as an initiative by a group of top software firms to protect their rights; it is added to existing open source licenses to form a new, combined license that limits the commercial sale of the software.

All of these efforts are aimed at making sure that open source communities are not taken advantage of by the leading cloud providers. As Kreps points out, "We think this is a positive change and one that can help ensure small open source communities aren't acting as free and unsustainable R&D (research and development) for tech giants that put sustaining resources only into their own differentiated proprietary offerings."

Related reading:
- Neo4j Enterprise Edition is now available under a commercial license
- GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
- Free Software Foundation updates their licensing materials, adds Commons Clause and Fraunhofer FDK AAC license

Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices

Savia Lobo
01 Aug 2018
3 min read
Istio, an open-source platform that connects, manages, and secures microservices, has announced version 1.0. Istio provides a service mesh for microservices from Google, IBM, Lyft, Red Hat, and other collaborators from the open-source community.

What's Istio?

Popularly known as a service mesh, Istio collects logs, traces, and telemetry, and adds security and policy without embedding client libraries. Istio also acts as a platform providing APIs for integration with systems for logging, telemetry, and policy. Istio helps measure actual traffic between services, including requests per second, error rates, and latency, and generates a dependency graph showing how services affect one another.

Istio offers a helping hand to your DevOps team by providing tools to run distributed apps smoothly. Here's what Istio does for your team:

- Performs canary rollouts, allowing the DevOps team to smoke-test any new build and ensure good build performance.
- Offers fault injection, retry logic, and circuit breaking, so DevOps teams can do more testing and change network behavior at runtime to keep applications up and running.
- Adds security. Istio can be used to layer mutual TLS (mTLS) on every call, adding encryption in flight with the ability to authorize every single call on your cluster and mesh.

What's new in Istio 1.0?

Multi-cluster support for Kubernetes

Multiple Kubernetes clusters can now be added to a single mesh, enabling cross-cluster communication and consistent policy enforcement. Multi-cluster support is now in beta.

Networking APIs now in beta

Networking APIs that enable fine-grained control over the flow of traffic through a mesh are now in beta. Explicitly modeling ingress and egress concerns using Gateways allows operators to control the network topology and meet access security requirements at the edge.

Incremental mutual TLS rollout

Mutual TLS can now be rolled out incrementally without requiring all clients of a service to be updated. This is a critical feature that unblocks in-place adoption by existing production deployments.

Out-of-process Mixer adapters

Mixer now has support for developing out-of-process adapters. This will become the default way to extend Mixer over the coming releases and makes building adapters much simpler.

Updated authorization policies

Authorization policies, which control access to services, are now entirely evaluated locally in Envoy, increasing their performance and reliability.

Recommended install method

Helm chart installation is now the recommended install method, offering rich customization options to adopt Istio on your terms.

Istio 1.0 also includes performance improvements backed by continuous regression testing, large-scale environment simulation, and targeted fixes. Read more about Istio 1.0 in its official release notes.

Related reading:
- 6 Ways to blow up your Microservices!
- How to build Dockers with microservices
- How to build and deploy Microservices using Payara Micro
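The circuit-breaking idea mentioned above (stop calling an unhealthy upstream and fail fast instead) can be sketched in a few lines. This is a minimal stand-alone illustration of the pattern, not Istio's implementation; in Istio the breaker lives in the Envoy sidecar, not in application code, and the thresholds here are arbitrary examples.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive
    failures, reject calls immediately until reset_after elapses."""
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None   # None means the circuit is closed

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None    # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0            # a success closes the circuit
        return result

def flaky():
    raise ConnectionError("upstream down")

cb = CircuitBreaker(max_failures=2, reset_after=60.0)
for _ in range(2):
    try:
        cb.call(flaky)
    except ConnectionError:
        pass
# The breaker is now open: further calls fail fast without
# touching the unhealthy upstream.
```

Failing fast like this keeps a struggling service from being hammered by retries, which is the "keep applications up and running" benefit the article describes.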

Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh

Amrata Joshi
12 Apr 2019
2 min read
This week, the team at Google Cloud announced the beta version of Traffic Director, a networking management tool for service mesh, at Google Cloud Next. Traffic Director Beta helps network managers understand what's happening in their service mesh. A service mesh is the network of microservices that makes up an application, together with the interactions between them.

Features of Traffic Director Beta

Fully managed with an SLA

Traffic Director's production-grade features come with a 99.99% SLA. Users don't have to worry about deploying and managing the control plane.

Traffic management

With the help of Traffic Director, users can easily deploy everything from simple load balancing to advanced features like request routing and percentage-based traffic splitting.

Resilient services

Users can keep their service up and running by deploying it across multiple regions as VMs or containers. Traffic Director delivers global load balancing with automatic cross-region overflow and failover, and lets users deploy service instances in multiple regions while requiring only a single service IP.

Scaling

Traffic Director handles growth in deployments and scales for larger services and installations.

Traffic management for open service proxies

Traffic Director provides a GCP (Google Cloud Platform)-managed traffic management control plane for xDSv2-compliant open service proxies such as Envoy.

Compatible with VMs and containers

Users can deploy Traffic Director-managed VM service and container instances with the help of managed instance groups and network endpoint groups.

Request routing policies

The tool supports routing features like traffic splitting and enables use cases like canarying, URL rewrites/redirects, fault injection, traffic mirroring, and advanced routing capabilities based on header values such as cookies.

To know more about this news, check out Google Cloud's official page.

Related reading:
- Google's Cloud Healthcare API is now available in beta
- Ian Goodfellow quits Google and joins Apple as a director of machine learning
- Google Cloud Next'19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more
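The percentage-based traffic splitting and canarying described above can be sketched as a weighted routing decision. This is a conceptual stand-in, not Traffic Director's mechanism (which is configured declaratively and enforced by proxies like Envoy); the backend names and percentages are illustrative.

```python
import hashlib

def pick_backend(request_id: str, splits):
    """Deterministic percentage-based split: hash the request id
    into a bucket in [0, 100) and walk the cumulative weights."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for backend, percent in splits:
        cumulative += percent
        if bucket < cumulative:
            return backend
    return splits[-1][0]

# Send 5% of traffic to a canary release, 95% to the stable version.
SPLITS = [("stable-v1", 95), ("canary-v2", 5)]

counts = {"stable-v1": 0, "canary-v2": 0}
for i in range(10_000):
    counts[pick_backend(f"req-{i}", SPLITS)] += 1
# counts["canary-v2"] lands near 500, i.e. roughly 5% of requests.
```

Hashing the request id (rather than drawing a fresh random number) keeps routing sticky per request, a common choice so the same request id always lands on the same version.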
Amazon may be planning to move from Oracle by 2020

Natasha Mathur
07 Aug 2018
3 min read
Amazon is reportedly working to shift its business away from Oracle's database software by 2020, as per a CNBC report last week. According to the report, Amazon has already started transferring most of its infrastructure internally to Amazon Web Services and will shift entirely by the first quarter of 2020.

Amazon and Oracle have long been fierce competitors, each arguing that its products and services are superior. But Amazon has also been a major Oracle customer, leveraging Oracle's database software for many years to power its infrastructure for retail and cloud businesses. Oracle's database has been a market standard since the 1990s and is one of the most important products for many organizations across the globe, providing the databases they run their operations on. Despite having started off its business with Oracle, Amazon launched AWS back in 2006, taking Oracle's SQL-based database head-on and stealing away many of Oracle's customers.

This is not the first time news of Amazon's shift away from Oracle has stirred up; Amazon's plans to move away from Oracle technology came to light back in January this year. But, as per a statement issued to CNBC on August 1, an Oracle spokesperson said that Amazon had "spent hundreds of millions of dollars on Oracle technology" over the past many years. In fact, Larry Ellison, CEO at Oracle, mentioned during Oracle's second-quarter fiscal 2018 earnings call that "A company you've heard of just gave us another $50 million this quarter to buy Oracle database and other Oracle technology. That company is Amazon."

The recent news of Amazon's migration comes at a time of substantial growth for AWS. AWS saw a 49% growth rate in Q2 2018, while Oracle's business has remained stagnant for four years, putting more pressure on the company. Amazon's "backlog revenue" (i.e., the total value of the company's future contract obligations) has also increased, reaching $16 billion from $12.4 billion in May. In addition, AWS has consistently appeared as a "Leader" in Gartner's Magic Quadrant for Cloud Infrastructure as a Service (IaaS) for the past six years.

There have also been regular wars of words between Larry Ellison and Andy Jassy, CEO of AWS, over each other's performance during conference keynotes and analyst calls. Jassy took a shot at Oracle last year during his keynote at the AWS big tech conference: "Oracle overnight doubled the price of its software on AWS. Who does that to their customers? Someone who doesn't care about the customer but views them as a means to their financial ends." Ellison slammed Amazon during the Oracle OpenWorld conference last year, saying "Oracle's services are just plain better than AWS" and that Amazon is "one of the biggest Oracle users on Planet Earth."

With the other cloud providers, such as Microsoft, Google, Alibaba, and IBM, catching up, Oracle seems to be losing the database race. So, if Amazon does decide to phase out Oracle, Oracle will have to step up its game big time to regain its cloud market share.

Related reading:
- Oracle makes its Blockchain cloud service generally available
- Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
- AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer


IPv6 support to be automatically rolled out for most Netify Application Delivery Network users

Melisha Dsouza
29 Nov 2018
3 min read
Earlier this week, Netlify announced in a blog post that it has begun rolling out IPv6 support on the Netlify Application Delivery Network. Netlify has adopted IPv6 as a solution to the IPv4 address exhaustion problem. This news comes right after the announcement that Netlify raised $30 million for a new 'Application Delivery Network' aiming to replace servers and infrastructure management.

Netlify provides developers with an all-in-one workflow to build, deploy, and manage modern web projects. Its 'Application Delivery Network' is a new platform for the web that will help developers build newer web-based applications. Developers don't need to set up or manage servers, as all content and applications are created directly on a global network. It removes the dependency on origin infrastructure, allowing companies to host entire applications globally using APIs and microservices.

An IP address is assigned to every server connected to the internet. Netlify explains that the traditionally used IPv4 address pool is shrinking as the internet continues to expand. This is where IPv6 steps in. IPv6 defines an IP address as a 128-bit entity instead of the 32-bit integer-based IPv4 addresses. For example, IPv4 defines an address as 167.99.129.42, whereas an IPv6 address looks like 2001:0db8:85a3:0000:0000:8a2e:0370:7334. Though the IPv6 format is harder to remember, it creates vastly more possible addresses to support the rapid growth of the internet. In addition to more efficient routing and packet processing, IPv6 also offers better security than IPv4, because IPSec, which provides confidentiality, authentication, and data integrity, is baked into IPv6.

According to the blog post, users serving their sites on a subdomain of netlify.com or using custom domains registered with an external domain registrar will automatically begin using IPv6 on their ADN.
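The size difference between the two address families is easy to see with Python's standard `ipaddress` module. This is a quick illustrative check using the example addresses above, not anything from Netlify's tooling:

```python
import ipaddress

# IPv4: 32-bit addresses -> about 4.3 billion possibilities
v4 = ipaddress.ip_address("167.99.129.42")

# IPv6: 128-bit addresses -> about 3.4 * 10**38 possibilities
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(v4.version, v4.max_prefixlen)  # 4 32
print(v6.version, v6.max_prefixlen)  # 6 128

# IPv6 addresses have a canonical compressed form that drops leading
# zeros and collapses the longest run of all-zero groups into "::"
print(v6.compressed)  # 2001:db8:85a3::8a2e:370:7334
```

The compressed form is why the same IPv6 address can appear in several spellings; `ipaddress` normalizes them for comparison.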
Customers using Netlify for DNS management can go to the Domains section of the dashboard and enable IPv6 for each of their domains. Customers with a complex or bespoke DNS configuration, or enterprise customers using Netlify's Enterprise ADN infrastructure, are advised to contact Netlify's support team or their account manager to ensure their specific configuration is migrated to IPv6 appropriately.

Netlify's users have received this news well:
https://twitter.com/sethvargo/status/1067152518638116864
Hacker News is also flooded with positive comments for Netlify. Netlify has started off on the right foot; it will be interesting to see what customers think after enabling IPv6 on their Netlify ADN. Head over to Netlify's blog for more insights on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
libp2p: the modular P2P network stack by IPFS for better decentralized computing


An early access to Sailfish 3 is here!

Savia Lobo
02 Nov 2018
3 min read
This week, Sailfish OS announced the early release of its third-generation software, Sailfish 3, and made it available to all Sailfish users who had opted in for early access updates. Sami Pienimäki, CEO and co-founder of Jolla Ltd, said in his release post, "we are expanding the Sailfish community program, 'Sailfish X', with a few key additions next week: on November 8 we release the software for various Sony Xperia XA2 models."

Why the name 'Sailfish'?
Sailfish 3.0.0 is named after the legendary Lemmenjoki National Park in Northern Lapland. As the release post puts it: "We've always aimed at respecting our Finnish roots in naming our software versions: previously we've covered lakes and rivers, and now we're set to explore our beautiful national parks."

Sailfish 3 will be rolled out in phases, with many features deployed across several software releases. The first phase, Sailfish 3.0.0, has been available as an early access version since October 31st. The customer release is expected to roll out in the coming weeks, and the next release, 3.0.1, is expected in early December.

Security and corporate features of Sailfish 3
Sailfish 3 has a deeper level of security, which makes it a go-to option for various corporate and organizational solutions and other use cases. New and enhanced features in Sailfish 3 include Mobile Device Management (MDM), fully integrated VPN solutions, enterprise WiFi, data encryption, and better and faster performance. It also offers full support for regional infrastructures, including steady releases and OS upgrades, local hosting, training, and a flexible feature set to support specific customer needs.

User experience highlights for Sailfish 3.0.0
New Top Menu: quick settings and shortcuts can now be accessed anywhere
Light ambiances: a fresh new look for Sailfish OS
Data encryption: memory card encryption is now available; device file system encryption is coming in later releases
New keyboard gestures: quickly change keyboard layouts with one swipe
USB On-The-Go storage: connect to different kinds of external storage devices
Camera improvements: the new lock screen camera roll lets you review the photos you just took without unlocking the device

Further, thanks to a rewritten way of launching apps and loading views, Sailfish 3 achieves much better UI performance. Sami mentions, "You can start to enjoy the faster Sailfish already now with the 3.0.0 release and the upcoming major Qt upgrade will further improve the responsiveness & performance resulting to 50% better overall performance." To know more about Sailfish 3 in detail, visit its official website.

GitHub now allows issue transfer between repositories; a public beta version
Introducing Howler.js, a Javascript audio library with full cross-browser support
BabyAI: A research platform for grounded language learning with human in the loop, by Yoshua Bengio et al

The major DNS blunder at Microsoft Azure affects Office 365, One Drive, Microsoft Teams, Xbox Live, and many more services

Amrata Joshi
03 May 2019
3 min read
It seems all is not well at Microsoft after yesterday's outage, in which Microsoft's Azure cloud went up and down globally because of a DNS configuration issue. The outage started at 1:20 pm and lasted for more than an hour, affecting Microsoft's cloud services, including Office 365, OneDrive, Microsoft Teams, Xbox Live, and many others used by Microsoft's commercial customers. Due to the networking connectivity errors in Microsoft Azure, even third-party apps and sites running on Microsoft's cloud were affected.

Around 2:30 pm, Microsoft began gradually recovering Azure regions one by one. Microsoft has yet to completely resolve the issue and has warned that it might take some time to get everyone back up and running.

This isn't the first time a DNS outage has affected Azure. In January this year, a few customers' databases went missing, affecting a number of Azure SQL databases that use custom Key Vault keys for Transparent Data Encryption (TDE).

https://twitter.com/AzureSupport/status/1124046510411460610

The Azure status page reads, "Customers may experience intermittent connectivity issues with Azure and other Microsoft services (including M365, Dynamics, DevOps, etc)." Microsoft engineers found that an incorrect name server delegation issue affected DNS resolution and network connectivity, which in turn affected compute, storage, app service, AAD, and SQL database resources. On the Microsoft 365 status page, too, Redmond's techies blamed an internal DNS configuration error for the downtime: during the migration of the DNS system to Azure DNS, some domains for Microsoft services were incorrectly updated. The good news is that no customer DNS records were impacted during this incident, and the availability of Azure DNS remained at 100% throughout; only records for Microsoft services were affected.
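The failure mode clients saw during the outage was names that simply stopped resolving. That condition is straightforward to probe from code; here is a minimal sketch using Python's standard `socket` module (illustrative only, not Microsoft's tooling):

```python
import socket

def check_dns(hostname: str) -> bool:
    """Return True if `hostname` currently resolves, False on a DNS failure.

    socket.gaierror is the error clients of the affected services
    would have hit while the name server delegation was broken.
    """
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# "localhost" resolves via the local hosts file, so no network is needed
print(check_dns("localhost"))  # True
```

A monitoring job built on a check like this distinguishes DNS failures from ordinary connection errors, which is useful when a provider's status page lags behind reality.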
According to Microsoft, the broken systems have been fixed, the three-hour outage has come to an end, and Azure's network infrastructure should soon be back to normal.

https://twitter.com/MSFT365Status/status/1124063490740826133

Users have reported issues with accessing the cloud service and are complaining. A user commented on Hacker News, "The sev1 messages in my inbox currently begs to differ. there's no issue maybe with the dns at this very moment but the platform is thoroughly fucked up." Users are also questioning the reliability of Azure. Another comment reads, "Man... Azure seems to be an order of magnitude worse than AWS and GCP when it comes to reliability." To know more about the status of the situation, check out Microsoft's post.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
Microsoft Cloud services' DNS outage results in deleting several Microsoft Azure database records


SpaceX delays launch of Starlink, its commercial satellite internet service, for the second time to “update satellite software”

Bhagyashree R
17 May 2019
4 min read
Update: On 23rd May, SpaceX successfully launched 60 satellites of the company's Starlink constellation to orbit after a launch from Cape Canaveral. "This is one of the hardest engineering projects I've ever seen done, and it's been executed really well," said Elon Musk, SpaceX's founder and CEO, during a press briefing last week. "There is a lot of new technology here, and it's possible that some of these satellites may not work, and in fact a small possibility that all the satellites will not work. We don't want to count anything until it's hatched, but these are, I think, a great design and we've done everything we can to maximize the probability of success," he said.

On Wednesday night, SpaceX was all set to send a Falcon 9 rocket into space carrying the very first 60 satellites for its new Starlink commercial satellite internet service. While everyone was eagerly waiting for the launch webcast, heavy winds ruined the show. SpaceX rescheduled the launch for 10:30 pm EDT from Florida's Cape Canaveral Air Force Station, but then canceled it yet again, citing software issues. The launch is now delayed by about a week.

https://twitter.com/SpaceX/status/1129181397262843906

Elon Musk's plans for SpaceX
This launch of 60 satellites, each weighing 227 kg, is the first step in Elon Musk's plan for a huge Starlink constellation. He eventually aims to build a mega-constellation of 12,000 satellites. If everything goes well, Falcon 9 will land on the "Of Course I Still Love You" drone ship in the Atlantic Ocean. One hour and 20 minutes after launch, the second stage will begin, when the Starlink satellites start self-deploying.

On Wednesday, Musk, in a teleconference with reporters, revealed a bunch of details about his satellite internet service. Explaining the release mechanism, he said that the satellites do not each have their own release mechanism.
Instead, the Falcon rocket's upper stage will begin a very slow rotation, and each satellite will be released in turn with a different amount of rotational inertia. "It will almost seem like spreading a deck of cards on a table," he adds. Once deployed, the satellites will power up their ion drives and open their solar panels, then move to an altitude of 550 km under their own power.

This is a new approach to delivering commercial satellite internet. Other satellite internet services, such as Viasat, depend on a few big satellites in geostationary orbit over 22,000 miles (35,000 kilometers) above Earth, as opposed to 550 km in this case. Conventional satellite internet services can suffer from high levels of latency because the signals have to travel a huge distance between the satellite and Earth. Starlink aims to bring the satellites into lower orbit to minimize that distance, resulting in less lag time. The catch is that because these satellites are closer to Earth, each one covers a smaller surface area, so a much greater number of them is required to cover the whole planet.

Though his plans look promising, Musk does not claim everything will go well. In the teleconference, he said, "This is very hard. There is a lot of new technology, so it's possible that some of these satellites may not work. There's a small possibility that all of these satellites will not work." He further added that six more launches of a similar payload will be required before the service even begins to offer "minor" coverage.

You will be able to see the launch webcast here or on the official website: https://www.youtube.com/watch?time_continue=8&v=rT366GiQkP0

Jeff Bezos unveils space mission: Blue Origin's Lunar lander to colonize the moon
Elon Musk reveals big plans with Neuralink
DeepMind, Elon Musk, and others pledge not to build lethal AI
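The latency argument above can be made concrete with a back-of-the-envelope calculation: signal delay is bounded below by distance divided by the speed of light. This sketch assumes the standard geostationary altitude of 35,786 km (the article's rounded "35,000 kilometers") and ignores ground-station distance and processing time:

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_latency_ms(altitude_km: float) -> float:
    """Best-case one-way signal delay to a satellite directly overhead."""
    return altitude_km / C_KM_PER_S * 1000.0

geo = one_way_latency_ms(35_786)  # geostationary orbit
leo = one_way_latency_ms(550)     # Starlink's target altitude

print(f"GEO one-way delay: {geo:.1f} ms")  # ~119.4 ms
print(f"LEO one-way delay: {leo:.2f} ms")  # ~1.83 ms
```

Even before routing overhead, a geostationary round trip costs roughly a quarter of a second (up and down, twice), which is why low-orbit constellations can feel dramatically more responsive despite needing far more satellites.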