
Tech News - Cloud & Networking

376 Articles

Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!

Melisha Dsouza
25 Feb 2019
4 min read
The ongoing Mobile World Congress 2019 in Barcelona has an interesting line-up of announcements, keynote speakers, summits, seminars and more. It is the largest mobile event in the world, bringing together the latest innovations and leading-edge technology from more than two thousand leading companies. The theme of this year's conference is 'Intelligent Connectivity', which combines flexible, high-speed 5G networks, the Internet of Things (IoT), artificial intelligence (AI) and big data. Microsoft unveiled a host of new products along the same theme on the first day of the conference. Let's have a look at some of them.

#1 Microsoft HoloLens 2 AR announced!

Microsoft unveiled the HoloLens 2 AR device at the Mobile World Congress (MWC). This $3,500 AR device is aimed at businesses, not the average consumer, yet. It is designed primarily for situations where field workers need to work hands-free, such as manufacturing workers, industrial designers and those in the military. The device is a definite upgrade from Microsoft's very first HoloLens, which recognized only basic tap and click gestures. The new headset recognizes 21 points of articulation per hand, allowing for improved and more realistic hand motions. The device is less bulky, and its eye tracking can measure eye movement and use it to interact with virtual objects. It is built to be a cloud- and edge-connected device, and its field of view more than doubles the area covered by HoloLens 1. Microsoft said it plans to announce a follow-up to HoloLens 2 in the next year or two. According to Microsoft, that device will be even more comfortable and easier to use, and will do more than the HoloLens 2. HoloLens 2 is available for preorder and will ship later this year. The device has already found itself in the midst of a controversy after the US Army invested $480 million in more than 100,000 headsets. The contract has stirred dissent among Microsoft workers.

#2 Azure-powered Kinect camera for enterprise

The Azure-powered Kinect camera is an "intelligent edge device that doesn't just see and hear but understands the people, the environment, the objects, and their actions," according to Azure VP Julia White. This AI-powered smart enterprise camera leverages Microsoft's 3D imaging technology and could serve as a companion hardware piece for HoloLens in the enterprise. The system has a 1-megapixel depth camera, a 12-megapixel camera and a seven-microphone array on board to help it work with "a range of compute types, and leverage Microsoft's Azure solutions to collect that data." The system, priced at $399, is available for pre-order.

#3 Azure Spatial Anchors

Azure Spatial Anchors has launched as part of the Azure mixed reality services. These services will help developers and businesses build cross-platform, contextual and enterprise-grade mixed reality applications. According to the Azure blog, these mixed reality apps can map, designate and recall precise points of interest that are accessible across HoloLens, iOS, and Android devices. Developers can integrate their solutions with IoT services and artificial intelligence, and protect their sensitive data using security from Azure. Users can easily infuse AI and integrate IoT services to visualize data from IoT sensors as holograms. Spatial Anchors will allow users to map their space and connect points of interest "to create wayfinding experiences, and place shareable, location-based holograms without any need for environmental setup or QR codes". Users will also be able to manage identity, storage, security, and analytics with pre-built cloud integrations to accelerate their mixed reality projects.

#4 Unreal Engine 4 support for Microsoft HoloLens 2

During the Mobile World Congress (MWC), Epic Games founder and CEO Tim Sweeney announced that support for Microsoft HoloLens 2 will be coming to Unreal Engine 4 in May 2019. Unreal Engine will fully support HoloLens 2 with streaming and native platform integration. Sweeney says that "AR is the platform of the future for work and entertainment, and Epic will continue to champion all efforts to advance open platforms for the hardware and software that will power our daily lives." Unreal Engine 4 support for Microsoft HoloLens 2 will allow for "photorealistic" 3D in AR apps. Head over to Microsoft's official blog for an in-depth look at all the products released.

Unreal Engine 4.22 update: support added for Microsoft's DirectX Raytracing (DXR)
Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
Microsoft joins the OpenChain Project to help define standards for open source software compliance

Atlassian sells Hipchat IP to Slack

Richard Gall
30 Jul 2018
3 min read
Before Slack was a thing, Atlassian's HipChat was one of a number of internal messaging tools trying to beat the competition in a nascent marketplace. However, with Slack today dominating the messaging app landscape, Atlassian has given in. The Australian company has announced it will be selling the HipChat IP to Slack and discontinuing the service in February 2019. The financial details of the deal haven't been disclosed. However, Slack CEO Stewart Butterfield did reveal that Atlassian will be making a "small but symbolically important investment in Slack" in a tweet on Thursday, July 26. https://twitter.com/stewart/status/1022574806623895553

The deal is being presented as a partnership rather than a straightforward acquisition. On the Atlassian blog, for example, Joff Redfern, VP of Product Management, was keen to stress that this was a partnership: "We have always had a spirited yet friendly competition with Slack (and have even sent each other congratulatory cookies and cake!). Across our product portfolio, we have long shared many integrations, which hundreds of thousands of teams use every day. Through this new partnership, both companies will lean into building better integrations together and more sharply define the modern workplace experience for companies everywhere."

As well as HipChat, Slack is also purchasing the IP for Stride, another messaging app released by Atlassian in September 2017. Stride was initially designed to succeed HipChat, but Redfern explained that Slack's dominance of the current market meant this step simply made sense for Atlassian: "While we've made great early progress with Stride, we believe the best way forward for our customers and for Atlassian is to enter into a strategic partnership with Slack and no longer offer our own real-time communications products." HipChat Server and HipChat Data Center will also be discontinued.

Conscious that this could lead to some real migration challenges, Atlassian has put together a detailed migration guide.

Who wins in the Slack and Atlassian deal?

The truth is that both parties have struck a good deal here (financial details notwithstanding). Atlassian, as it acknowledges, simply couldn't compete in a market that Slack seems to dominate. For Slack, too, the deal comes at a good time. Microsoft's Teams app is set to replace Skype for Business in Microsoft's Office 365 suite. A free version of Teams, released earlier this month, which doesn't require an Office 365 subscription, could also be some cause for concern for Slack.

The one group that loses: users

Although the deal might work out well for both Slack and Atlassian, there was considerable anger on Atlassian's community forums. One user asked: "What the hell are on-premise customers supposed to do?! We just implemented and invested in this app! We're building apps in-house for our own purposes. We have zero ability to use Cloud services of ANY type. You are offering ZERO alternatives." Another user outlined his frustrations with what the deal means for migration: "We needed a chat platform. We did research and after a long time landed on hipchat. We had to pull teeth to get users to move to it. We transitioned bots and automations over to hipchat."

Facebook and Microsoft announce Open Rack V3 to address the power demands from artificial intelligence and networking

Bhagyashree R
18 Mar 2019
3 min read
For the past few months, Facebook and Microsoft have been working together on a new architecture based on the Open Rack standards. Last week, Facebook announced a new initiative that aims to build uniformity around the Rack & Power design. The Rack & Power Project Group is responsible for setting the rack standards designed for data centers and integrating the rack into the data center infrastructure. This project comes under a larger initiative started by Facebook called the Open Compute Project.

Why is a new version of Open Rack needed?

Today, the industry is turning to AI and ML systems to solve several difficult problems. Though these systems are helpful, they require increased power density at both the component level and the system level. The ever-increasing demand for bandwidth in networking systems has led to similar problems. So, in order to improve overall system performance, it is important to get memory, processors, and system fabrics as close together as possible. This new architecture of Open Rack will bring greater benefits compared to the current version, Open Rack V2. "For this next version, we are collaborating to create flexible, interoperable, and scalable solutions for the community through a common OCP architecture. Accomplishing this goal will enable wider adoption of OCP technologies across multiple industries, which will benefit operators, solution providers, original design manufacturers, and configuration managers," shared Facebook in the blog post.

What are the goals of this initiative?

This new initiative aims to achieve the following goals:

A common OCP rack architecture to enable greater sharing between Microsoft and Facebook.
Creating a flexible frame and power infrastructure that will support a wide range of solutions across the OCP community. Apart from the features needed by Facebook, this architecture will come with additional features for the larger community, including physical security for solutions deployed in co-location facilities.
Introducing new thermal solutions, such as liquid cooling manifolds, door-based heat exchangers, and defined physical and thermal interfaces. These solutions are currently under development by the Advanced Cooling Solutions sub-project.
Introducing new power and battery backup solutions that scale across different rack power levels and also accommodate different power input types.

To know more, check out the official announcement from Facebook.

Two top executives leave Facebook soon after the pivot to privacy announcement
Facebook tweet explains 'server config change' for 14-hour outage on all its platforms
Facebook under criminal investigations for data sharing deals: NYT report

Linux Foundation launches the Acumos AI Project to make AI accessible

Savia Lobo
08 May 2018
2 min read
The Linux Foundation recently launched the Acumos AI Project with the aim of making AI accessible to all. Acumos AI is a platform and open source framework to easily build, share and deploy artificial intelligence, machine learning, and deep learning applications. As a part of the LF Deep Learning Foundation, Acumos strives to make these AI, ML and DL technologies available to developers and data scientists everywhere. It caters to a broad range of business use cases, including network analytics, customer care, field service and equipment repair, healthcare analytics, network security and advanced video services, and many more. Let's have a look at what Acumos AI has in store.

The Acumos AI Project:

Packages toolkits such as TensorFlow and scikit-learn, and models, with a common API that allows them to seamlessly connect
Allows easy onboarding and training of models and tools
Supports a variety of popular software languages, including Java, Python, and R
Leverages modern microservices and containers in order to package and export production-ready AI applications as Docker files
Includes a federated AI Model Marketplace, a catalog of AI models contributed by the community that can be securely shared

Benefits of Acumos AI

It provides a standardized platform, easy export, and Docker-file deployment to any environment, including major public clouds, making stand-up and maintenance a breeze. Its simplified toolkit and model onboarding help data scientists focus on building great AI models rather than maintaining infrastructure. Acumos AI comprises a visual design editor, a drag-and-drop application designer, and a chaining feature, where applications can be chained to create an array of AI services. These enable end users to deploy complicated AI apps for training and testing within minutes. Read the Acumos AI whitepaper to know more about the Acumos AI Project in detail.
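The "common API" idea behind Acumos, wrapping models from different toolkits behind one interface so they can be connected and chained, can be sketched in plain Python. The class and function names below are illustrative assumptions for this article, not the actual Acumos SDK:

```python
# Illustrative sketch of a "common API" over heterogeneous models,
# in the spirit of Acumos's chaining feature. The names here are
# hypothetical, not the real Acumos SDK.

class ModelWrapper:
    """Uniform predict() interface over models from different toolkits."""

    def __init__(self, name, predict_fn):
        self.name = name
        self._predict_fn = predict_fn

    def predict(self, features):
        return self._predict_fn(features)


def chain(*models):
    """Feed each model's output into the next, as in application chaining."""
    def run(features):
        result = features
        for model in models:
            result = model.predict(result)
        return result
    return run


# Two toy "models" standing in for, say, a scikit-learn scaler
# and a TensorFlow classifier.
scaler = ModelWrapper("scaler", lambda xs: [x / 10.0 for x in xs])
classifier = ModelWrapper("classifier",
                          lambda xs: ["high" if x > 0.5 else "low" for x in xs])

pipeline = chain(scaler, classifier)
print(pipeline([3, 7]))  # → ['low', 'high']
```

Because every wrapped model exposes the same predict() signature, the chaining function never needs to know which toolkit produced the model, which is the point of a common API.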
Kali Linux 2018.2 released
How to implement In-Memory OLTP on SQL Server in Linux
What to expect from upcoming Ubuntu 18.04 release

Amazon Neptune, AWS’ cloud graph database, is now generally available

Savia Lobo
31 May 2018
2 min read
Last year, Amazon Web Services (AWS) announced the launch of its fast, reliable, and fully-managed cloud graph database, Amazon Neptune, at AWS re:Invent 2017. Recently, AWS announced that Neptune is now generally available. Graph databases store the relationships between connected data as graphs. This enables applications to access the data in a single operation, rather than a bunch of individual queries for all the data. Similarly, Neptune makes it easy for developers to build and run applications that work with highly connected datasets. And as a managed AWS graph database service, it offers developers high scalability, security, durability, and availability.

As Neptune becomes generally available, it brings a large number of performance enhancements and updates, including:

AWS CloudFormation support
AWS Command Line Interface (CLI)/SDK support
An update to Apache TinkerPop 3.3.2
Support for IAM roles with bulk loading from Amazon Simple Storage Service (S3)

Some of the benefits of Amazon Neptune include:

Neptune's query processing engine is highly optimized for two of the leading graph models, Property Graph and W3C's RDF. It also supports their associated query languages, Apache TinkerPop Gremlin and SPARQL, giving customers the flexibility to choose the right approach for their specific graph use case.
Neptune storage scales automatically, without downtime or performance degradation, as customer data increases.
It allows developers to design sophisticated, interactive graph applications that can query billions of relationships with millisecond latency.
There are no upfront costs, licenses, or commitments required; customers pay only for the Neptune resources they use.

To know more interesting facts about Amazon Neptune in detail, visit its official blog.

2018 is the year of graph databases. Here's why.
From Graph Database to Graph Company: Neo4j's Native Graph Platform addresses evolving needs of customers
When, why and how to use Graph analytics for your big data
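The "single operation" point above, that a graph store answers relationship queries by traversal rather than repeated lookups, can be illustrated with a toy in-memory graph. This is a generic sketch of the graph model only, not Neptune's actual Gremlin or SPARQL interface:

```python
# Toy adjacency-list graph illustrating why connected data is easy
# to query in one traversal. Generic illustration of the graph model,
# not Amazon Neptune's Gremlin/SPARQL API.

graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": [],
    "erin": [],
}

def friends_of_friends(g, start):
    """One traversal collects second-degree connections, instead of
    issuing a separate query per first-degree neighbour."""
    direct = set(g.get(start, []))
    second = set()
    for friend in direct:
        second.update(g.get(friend, []))
    return sorted(second - direct - {start})

print(friends_of_friends(graph, "alice"))  # → ['dave', 'erin']
```

In a relational store the same question typically means one query per neighbour (or a multi-way join); because the graph stores edges directly, the traversal is a single walk over local adjacency.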

Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads

Melisha Dsouza
27 Nov 2018
3 min read
The Amazon re:Invent 2018 conference saw a surge of new announcements and releases. The five-day event, which commenced in Las Vegas yesterday, has already seen some exciting developments in the field of AWS, like AWS RoboMaker, AWS Transfer for SFTP (a fully managed SFTP service for Amazon S3), EC2 A1 instances powered by Arm-based AWS Graviton processors, an improved AWS Snowball Edge and much more. In this article, we will look at the latest release: Firecracker, a new virtualization technology and open source project for running multi-tenant container workloads.

Firecracker is open sourced under Apache 2.0 and enables service owners to operate secure multi-tenant container-based services. It combines the speed, resource efficiency, and performance enabled by containers with the security and isolation offered by traditional VMs. Firecracker implements a virtual machine monitor (VMM) based on Linux's Kernel-based Virtual Machine (KVM). Users can create and manage microVMs with any combination of vCPU and memory with the help of a RESTful API. It incorporates a faster startup time, provides a reduced memory footprint for each microVM, and offers a trusted sandboxed environment for each container.

Features of Firecracker

Firecracker uses multiple levels of isolation and protection, making it secure by design. The security model includes a very simple virtualized device model to minimize the attack surface, a process jail, and static linking.
It delivers high performance, allowing users to launch a microVM in as little as 125 ms.
It has a low overhead and consumes about 5 MiB of memory per microVM. This means a user can run thousands of secure VMs with widely varying vCPU and memory configurations on the same instance.
Firecracker is written in Rust, which guarantees thread safety and prevents many types of buffer overrun errors that can lead to security vulnerabilities.
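The RESTful API mentioned above configures a microVM by PUTting small JSON resources (machine config, boot source, drives) to a local Unix socket before issuing an InstanceStart action. The sketch below only builds those payloads with the Python standard library; the field names follow Firecracker's published API, but the kernel and rootfs paths are placeholder assumptions, and actually launching a microVM requires a running firecracker process and its API socket:

```python
import json

# Sketch of the JSON resources a client PUTs to Firecracker's API
# socket before starting a microVM. Field names follow Firecracker's
# documented API; the image paths are hypothetical placeholders.

machine_config = {"vcpu_count": 2, "mem_size_mib": 256}

boot_source = {
    "kernel_image_path": "/images/vmlinux.bin",   # hypothetical path
    "boot_args": "console=ttyS0 reboot=k panic=1",
}

root_drive = {
    "drive_id": "rootfs",
    "path_on_host": "/images/rootfs.ext4",        # hypothetical path
    "is_root_device": True,
    "is_read_only": False,
}

# Each (method, endpoint, body) triple would be sent over the local
# Unix socket (e.g. with curl --unix-socket), ending with InstanceStart.
requests_to_send = [
    ("PUT", "/machine-config", machine_config),
    ("PUT", "/boot-source", boot_source),
    ("PUT", "/drives/rootfs", root_drive),
    ("PUT", "/actions", {"action_type": "InstanceStart"}),
]

for method, endpoint, body in requests_to_send:
    print(method, endpoint, json.dumps(body))
```

The small, fixed set of resources is part of why microVMs start so quickly: there is no BIOS and no broad device model to configure, just a kernel, a drive, and a CPU/memory shape.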
The AWS community has shown a positive response to this release: https://twitter.com/abbyfuller/status/1067285030035046400

AWS Lambda uses Firecracker for provisioning and running secure sandboxes to execute customer functions. These sandboxes can be quickly provisioned with a minimal footprint, enabling performance along with security. AWS Fargate tasks also execute on Firecracker microVMs, which allows the Fargate runtime layer to run faster and more efficiently on EC2 bare-metal instances. To learn more, head over to the Firecracker page. You can also read more on Jeff Barr's blog and the AWS Open Source blog.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!

Vincy Davis
09 May 2019
3 min read
The three-day Red Hat Summit 2019 has kicked off at the Boston Convention and Exhibition Center in the United States. Yesterday, Red Hat announced a major release, Red Hat OpenShift 4, the next generation of its trusted enterprise Kubernetes platform, re-engineered to address the complex realities of container orchestration in production systems. Red Hat OpenShift 4 simplifies hybrid and multicloud deployments to help IT organizations deliver new applications and help businesses thrive and gain momentum in an ever-more competitive set of markets.

Features of Red Hat OpenShift 4

Simplify and automate the cloud, everywhere

Red Hat OpenShift 4 will help automate and operationalize best practices for modern application platforms. It will operate as a unified cloud experience for the hybrid world and enable an automation-first approach, including:

Self-managing platform for hybrid cloud: This provides a cloud-like experience via automatic software updates and lifecycle management across the hybrid cloud, enabling greater security, auditability, repeatability, ease of management and a better user experience.
Adaptability and heterogeneous support: It will be available in the coming months across major public cloud vendors including Alibaba, Amazon Web Services (AWS), Google Cloud, IBM Cloud and Microsoft Azure, private cloud technologies like OpenStack, virtualization platforms and bare-metal servers.
Streamlined full-stack installation: A streamlined full-stack installation with an automated process makes it easier to get started with enterprise Kubernetes.
Simplified application deployments and lifecycle management: Red Hat brought stateful and complex applications to Kubernetes with Operators, which enable self-operating application maintenance, scaling and failover.

Trusted enterprise Kubernetes

The Cloud Native Computing Foundation (CNCF) has certified the Red Hat OpenShift Container Platform in accordance with Kubernetes. It is built on the backbone of the world's leading enterprise Linux platform, backed by the open source expertise, compatible ecosystem, and leadership of Red Hat. It also provides a codebase that secures key innovations from upstream communities.

Empowering developers to innovate

OpenShift 4 supports the evolving needs of application development as a consistent platform to optimize developer productivity with:

Self-service, automation and application services that help developers extend their applications through on-demand provisioning of application services.
Red Hat CodeReady Workspaces, which enables developers to harness the power of containers and Kubernetes while working with the familiar Integrated Development Environment (IDE) tools they use day-to-day.
OpenShift Service Mesh, which combines the Istio, Jaeger, and Kiali projects into a single capability that encodes communication logic for microservices-based application architectures.
Knative for building serverless applications, in Developer Preview, which makes Kubernetes an ideal platform for building, deploying and managing serverless or function-as-a-service (FaaS) workloads.
KEDA (Kubernetes-based Event-Driven Autoscaling), a collaboration between Microsoft and Red Hat that supports deployment of serverless event-driven containers on Kubernetes, enabling Azure Functions in OpenShift, in Developer Preview. This allows accelerated development of event-driven, serverless functions across hybrid cloud and on-premises with Red Hat OpenShift.

Red Hat mentioned that OpenShift 4 will be available in the coming months. To read more details about OpenShift 4, head over to the official press release from Red Hat. To know about the other major announcements at Red Hat Summit 2019, like the Microsoft collaboration and Red Hat Enterprise Linux 8 (RHEL 8), visit our coverage of the Red Hat Summit 2019 highlights.
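The Operators mentioned above build on the Kubernetes control-loop pattern: repeatedly compare an application's desired state with its observed state and act to converge them. The sketch below is a purely illustrative, toy version of that reconcile loop over plain dicts; a real operator would watch the Kubernetes API server, and all names here are assumptions:

```python
# Generic sketch of the "operator" control loop that self-operating
# maintenance, scaling and failover rely on. Purely illustrative;
# a real Kubernetes operator watches the API server, not dicts.

def reconcile(desired, observed):
    """Return the actions needed to move the observed state toward
    the desired state (scale up/down, restart failed replicas)."""
    actions = []
    have = observed.get("replicas", 0)
    want = desired["replicas"]
    if have < want:
        actions.append(("scale_up", want - have))       # add replicas
    elif have > want:
        actions.append(("scale_down", have - want))     # remove extras
    if observed.get("failed"):
        actions.append(("restart", observed["failed"]))  # failover
    return actions

desired = {"replicas": 3}
observed = {"replicas": 2, "failed": ["db-1"]}
print(reconcile(desired, observed))  # → [('scale_up', 1), ('restart', ['db-1'])]
```

Running this comparison continuously is what lets an Operator "self-operate" an application: drift from the declared state is detected and corrected without a human issuing the scaling or recovery commands.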
Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend

JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding

Sugandha Lahoti
05 Oct 2018
2 min read
JFrog, the DevOps-based artifact management platform, announced a $165 million Series D funding round yesterday. The round was led by Insight Venture Partners. The secured funding is expected to drive JFrog product innovation, support rapid expansion into new markets, and accelerate both organic and inorganic growth. Other new investors included Spark Capital and Geodesic Capital, as well as existing investors including Battery Ventures, Sapphire Ventures, Scale Venture Partners, Dell Technologies Capital and Vintage Investment Partners. Additional JFrog investors include Gemini VC Israel, Qumra Capital and VMware.

JFrog transforms the way software is updated by offering an end-to-end, universal, highly available software release platform. The platform is used for storing, securing, monitoring and distributing binaries for all technologies, including Docker, Go, Helm, Maven, npm, NuGet, PyPI, and more. According to the company, more than 5 million developers currently use JFrog Artifactory as their system of record when they build and release software. It also supports multiple deployment options, with its products available in a hybrid model, on-premise, and across the major cloud platforms: Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

The announcement comes on the heels of Microsoft's $7.5 billion purchase of the coding-collaboration site GitHub earlier this year. Since its Series C funding round in 2016, the company has seen more than 500% sales growth and expanded its reach to over 4,500 customers, including more than 70% of the Fortune 100. It continues to add 100 new commercial logos per month and supports the world's open source communities with its Bintray binary hub. Bintray powers 700K community projects distributing over 5.5M unique software releases that generate over 3 billion downloads a month. Read more about the announcement in JFrog's official press release.

OmniSci, formerly MapD, gets $55 million in series C funding
Microsoft's GitHub acquisition is good for the open source community
Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms

Amrata Joshi
07 Aug 2019
3 min read
Last week, Microsoft introduced a preview of Azure Dedicated Host, which provides a physical server hosted on Azure that is not shared with other customers. The company has also made a few licensing changes that will make Microsoft software a bit more expensive for AWS, Google and Alibaba customers. Currently, the dedicated host is available in two specifications. The first, Type 1, is based on a 2.3 GHz Intel Xeon E5-2673 v4 (Broadwell) and has 64 vCPUs (virtual CPUs); it costs $4.055 or $4.492 per hour depending on the RAM (256 GB or 448 GB). The other, Type 2, is based on the Xeon Platinum 8168 (Skylake) and comes with 72 vCPUs and 144 GB RAM; it costs $4.039 per hour. These prices don't include licensing costs, and it seems Microsoft is trying to bring changes in this area.

Last week, the team at Microsoft announced that they will modify their licensing terms related to outsourcing rights and dedicated hosted cloud services on October 1, 2019. The team further stated that this change won't impact the use of existing software versions under licenses purchased before October 1, 2019. The official post reads, "Currently, our outsourcing terms give on-premises customers the option to deploy Microsoft software on hardware leased from and managed by traditional outsourcers." The team is updating the outsourcing terms for Microsoft on-premises licenses in order to clarify the difference between on-premises/traditional outsourcing and cloud services. Additionally, they are planning to create more consistent licensing terms across multi-tenant and dedicated hosted cloud services. Customers will either have to rent the software via the Services Provider License Agreement (SPLA) or purchase a license with Software Assurance, an annual service charge.

From October 1, on-premises licenses purchased without Software Assurance and mobility rights will no longer be deployable with dedicated hosted cloud services offered by public cloud providers such as Microsoft, Alibaba, Amazon (including VMware Cloud on AWS), and Google. According to Microsoft, these will be referred to as "Listed Providers." The changes won't apply to other providers or to the Services Provider License Agreement (SPLA) program, nor to the License Mobility for Software Assurance benefit, except for expanding that benefit to cover dedicated hosted cloud services.

Customers will benefit from licensed Microsoft products on a dedicated cloud platform

Customers will be able to license Microsoft products on dedicated hosted cloud services from the Listed Providers. Users can continue to deploy and use the software under their existing licenses on Listed Providers' servers that are dedicated to them, but they will not be able to add workloads under licenses acquired on or after October 1, 2019. After October 1, users will be able to use the products through the purchase of cloud services directly from the Listed Provider. If they have licenses with Software Assurance, these can be used with the Listed Providers' dedicated hosted cloud services under License Mobility or Azure Hybrid Benefit rights. These changes don't apply to the deployment and use of licenses outside of a Listed Provider's data center, but they do apply to both first- and third-party offerings on a dedicated hosted cloud service from a Listed Provider. To know more about this news, check out the official post by Microsoft.

CERN plans to replace Microsoft-based programs with affordable open-source software
SoftBank announces a second AI-focused Vision Fund worth $108 billion with Microsoft and Apple as major investors
Why are experts worried about Microsoft's billion dollar bet in OpenAI's AGI pipe dream?

Microsoft releases Windows 10 Insider build 17682!

Natasha Mathur
01 Jun 2018
3 min read
Microsoft announced today that they are releasing Windows 10 Insider build 17682 from the RS5 branch today. The new release includes sets improvements, wireless projection experience, Microsoft Edge improvements, and RSAT along with other updates and fixes. Major improvements and updates Sets Improvements New tab page has been updated which makes it easy to launch apps. On clicking the plus button in a Sets window, apps are visible in the frequent destinations list. The all apps list have been integrated into the new tab page to make it easy to browse apps instead of using the search box. Apps supporting Sets when clicked will launch into a new tab. In case you select News Feed, just select the “Apps” link which is next to “News Feed”, this will help switch to the all apps list. Managing Wireless Projection Experience Earlier, there were disturbances during wireless projection for users when the session was started through file explorer or an app. This has been fixed now with Windows 10 Insider build 17682 as there’ll be a control banner at the top of a screen during a session. The control banner informs you about your connection state, lets you tune the connection as well as helps with quick disconnect or reconnect to the same sink. Tuning is done with the help of settings gear. Screen to screen latency is optimized based on the following scenarios: Game mode makes gaming over a wireless connection possible by minimizing screen to screen latency. Video mode ensures smooth playback of the videos without any glitches on the big screen by increasing screen to screen latency. Productivity mode helps to balance between game mode and video mode. Screen to screen latency is responsive enough so that typing feels natural while ensuring limited glitch in the videos. All connections start off in the productivity mode. 
Improvements in Microsoft Edge for developers
With the latest Windows 10 Insider build 17682, Microsoft Edge gains unprefixed support for the new Web Authentication API (WebAuthn). Web Authentication provides a scalable, interoperable solution for replacing passwords with stronger, hardware-bound credentials. Microsoft Edge users can authenticate securely to websites using Windows Hello (via PIN or biometrics) or external authenticators such as FIDO2 Security Keys or FIDO U2F Security Keys.

RSAT available on demand
There is no need to manually download RSAT on every upgrade. Select "Manage optional features" in Settings, then click "Add a feature" to see all the listed RSAT components. Pick the components you want, and on the next upgrade Windows will ensure that those components automatically persist through the upgrade.

More information about other known issues and improvements is on the Windows Blog.

- Microsoft Cloud Services get GDPR Enhancements
- Microsoft releases Windows 10 SDK Preview Build 17115 with Machine Learning APIs
- Microsoft introduces SharePoint Spaces, adds virtual reality support to SharePoint
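Under the hood, the Web Authentication API described above is a challenge-response protocol: the website (the "relying party") issues a random challenge, and the browser echoes it back, bound to the page's origin, so stolen credentials cannot be replayed elsewhere. The following is a rough, hypothetical Python sketch of the server-side checks involved -- not Edge's or any real library's implementation, and with example.com standing in for a real origin:

```python
import base64
import json
import secrets

def new_challenge() -> bytes:
    # The relying party generates a fresh random challenge for each
    # ceremony to prevent replay attacks.
    return secrets.token_bytes(32)

def b64url(data: bytes) -> str:
    # WebAuthn encodes binary fields as base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def verify_client_data(client_data_json: bytes, expected_challenge: bytes,
                       expected_origin: str) -> bool:
    # Server-side checks on the clientDataJSON returned by the browser:
    # the challenge must match what we issued, and the origin must be ours.
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.create"
            and data.get("challenge") == b64url(expected_challenge)
            and data.get("origin") == expected_origin)

# Simulated round trip: what a browser would echo back to the server.
challenge = new_challenge()
client_data = json.dumps({
    "type": "webauthn.create",
    "challenge": b64url(challenge),
    "origin": "https://example.com",
}).encode()

print(verify_client_data(client_data, challenge, "https://example.com"))  # True
```

In the real API, the browser additionally returns a signature from the authenticator (Windows Hello or a FIDO2 key) over these fields, which the server verifies against the registered public key.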
MariaDB announces the release of MariaDB Enterprise Server 10.4

Amrata Joshi
12 Jun 2019
4 min read
Yesterday, the team at MariaDB announced the release of MariaDB Enterprise Server 10.4, code-named "restful nights". It is a hardened and secured server, distinct from MariaDB's Community Server. This release focuses on solving enterprise customer needs, offering greater reliability, stability, and long-term support in production environments. MariaDB Enterprise Server 10.4 and its backported versions will be available to customers by the end of the month as part of the MariaDB Platform subscription.

https://twitter.com/mariadb/status/1138737719553798144

The official blog post reads, "For the past couple of years, we have been collaborating very closely with some of our large enterprise customers. From that collaboration, it has become clear that their needs differ vastly from that of the average community user. Not only do they have different requirements on quality and robustness, they also have different requirements for features to support production environments. That's why we decided to invest heavily into creating a MariaDB Enterprise Server, to address the needs of our customers with mission critical production workloads."

MariaDB Enterprise Server 10.4 comes with added functionality for enterprises running MariaDB at scale in production environments. It also involves new levels of testing and ships in a secure-by-default configuration.
It also includes the same features as MariaDB Server 10.4, including bitemporal tables, an expanded set of instant schema changes, and a number of improvements to authentication and authorization (e.g., password expiration and automatic/manual account locking).

Max Mether, VP of Server Product Management, MariaDB Corporation, wrote in an email to us, "The new version of MariaDB Server is a hardened database that transforms open source into enterprise open source." He further added, "We worked closely with our customers to add the features and quality they need to run in the most demanding production environments out-of-the-box. With MariaDB Enterprise Server, we're focused on top-notch quality, comprehensive security, fast bug fixes and features that let our customers run at internet-scale performance without downtime."

James Curtis, Senior Analyst, Data Platforms and Analytics, 451 Research, said, "MariaDB has maintained a solid place in the database landscape during the past few years." He added, "The company is taking steps to build on this foundation and expand its market presence with the introduction of MariaDB Enterprise Server, an open source, enterprise-grade offering targeted at enterprise clients anxious to stand up production-grade MariaDB environments."

Reliability and stability
MariaDB Enterprise Server 10.4 offers the reliability and stability required for production environments, with bug fixes that further help maintain that reliability. Key enterprise features are backported for those running earlier versions of MariaDB Server, with long-term support.

Security
Unsecured databases are often the cause of data breaches, but MariaDB Enterprise Server 10.4 is configured with security settings that support enterprise applications. All non-GA plugins are disabled by default to reduce the risks incurred when using unsupported features.
Further, the default configuration is changed to enforce strong security, durability, and consistency.

Enterprise backup
MariaDB Enterprise Server 10.4 offers an enterprise backup that brings operational efficiency to customers with large databases by breaking backups into non-blocking stages. This way, writes and schema changes can occur during backups rather than waiting for the backup to complete.

Auditing capabilities
The server adds stronger, more secure, and easier auditing by logging all changes to the audit configuration. It also logs detailed connection information, giving customers a comprehensive view of changes made to the database.

End-to-end encryption
It also offers end-to-end encryption for multi-master clusters, where the transaction buffers are encrypted to ensure the data is secure.

https://twitter.com/holgermu/status/1138511727610478594

Learn more about this news on the official web page.

- MariaDB CEO says big proprietary cloud vendors "strip-mining open-source technologies and companies"
- MariaDB announces MariaDB Enterprise Server and welcomes Amazon's Mark Porter as an advisor to the board of directors
- TiDB open sources its MySQL/MariaDB compatible data migration (DM) tool

SAP Cloud Platform is now generally available on Microsoft Azure

Savia Lobo
11 Jun 2018
3 min read
Microsoft stated that SAP Cloud Platform is now generally available on Azure. SAP Cloud Platform enables developers to build SAP applications and extensions using a PaaS development platform with integrated services. With the platform generally available, developers can now deploy Cloud Foundry-based SAP Cloud Platform on Azure. It is currently available in the West Europe region, and Microsoft is working with SAP to enable more regions in the months to come. With SAP HANA's availability on Microsoft Azure, one can expect:

Largest SAP HANA optimized VM size in the cloud
Microsoft will soon launch the Azure M-series, which will support large-memory virtual machines with sizes up to 12 TB, based on Intel Xeon Scalable (Skylake) processors, offering the most memory available of any VM in the public cloud. The M-series will help customers push the limits of virtualization in the cloud for SAP HANA.

Availability of a range of SAP HANA certified VMs
For customers who wish to use smaller instances, Microsoft also offers smaller M-series VM sizes. These range from 192 GB to 4 TB across 10 different VM sizes and extend Azure's SAP HANA certified M-series. These smaller M-series VMs offer on-demand, SAP-certified instances with the flexibility to spin up or scale up in less time, and to spin down to save costs within a pay-as-you-go model available worldwide. Such flexibility and agility is not possible with a private cloud or on-premises SAP HANA deployment.

24 TB bare-metal instance and optimized price per TB
For customers who need a higher-performance dedicated offering for SAP HANA, Microsoft now offers additional SAP HANA TDIv5 options of 6 TB, 12 TB, 18 TB, and 24 TB configurations, in addition to the current configurations from 0.7 TB to 20 TB.
For customers who require more memory with the same number of cores, these configurations deliver a better price per TB deployed.

A lot more options for SAP HANA in the cloud
SAP HANA on Azure has 26 distinct offerings from 192 GB to 24 TB, scale-up certification up to 20 TB, and scale-out certification up to 60 TB. With global availability in 12 regions and plans to expand to 22 regions in the next six months, Azure now offers the most choice for SAP HANA workloads of any public cloud.

Microsoft Azure also enables customers to extract insights and analytics from SAP data with services such as:
- The Azure Data Factory SAP HANA connector, to automate data pipelines
- Azure Data Lake Store, for hyper-scale data storage
- Power BI, an industry-leading self-service visualization tool, to create rich dashboards and reports from SAP ERP data

Read more about SAP Cloud Platform on Azure on the Microsoft Azure blog.

- How to perform predictive forecasting in SAP Analytics Cloud
- Epicor partners with Microsoft Azure to adopt Cloud ERP
- New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL

‘Istio’ available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management

Melisha Dsouza
12 Dec 2018
3 min read
KubeCon+CloudNativeCon, happening in Seattle this week, has excited developers with a plethora of new announcements and releases. This conference, dedicated to Kubernetes and other cloud native technologies, brings together adopters and technologists from leading open source and cloud native communities to discuss new advancements on the cloud native front. At this year's conference, Google Cloud announced the beta availability of Istio for Google Kubernetes Engine.

Istio was launched in the middle of 2017 as a collaboration between Google, IBM, and Lyft. According to Google, this open source "service mesh", used to connect, manage, and secure microservices on a variety of platforms (such as Kubernetes), will play a vital role in helping developers make the most of their microservices. Yesterday, Google Cloud Director of Engineering Chen Goldberg and Director of Product Management Jennifer Lin said in a blog post that the availability of Istio on Google Kubernetes Engine will provide "more granular visibility, security and resilience for Kubernetes-based apps". The service will be made available through Google's Cloud Services Platform, which bundles together the tools and services developers need to get their container apps up and running on the company's cloud or in on-premises data centres.

In an interview with SiliconANGLE, Holger Mueller, principal analyst and vice president of Constellation Research Inc., compared software containers to "cars". He says that "Kubernetes has built the road but the cars have no idea where they are, how fast they are driving, how they interact with each other or what their final destination is. Enter Istio and enterprises get all of the above. Istio is a logical step for Google and a sign that the next level of deployments is about manageability, visibility and awareness of what enterprises are running."

Additional features of Istio
- Istio allows developers and operators to manage applications as services rather than as many different infrastructure components.
- Istio lets users encrypt all their network traffic, layering transparently onto any existing distributed application. Users need not embed any client libraries in their code to get this functionality.
- Istio on GKE comes with an integration with Stackdriver, Google Cloud's monitoring and logging service.
- Istio securely authenticates and connects a developer's services to one another. It transparently adds mTLS to service communication, encrypting all information in transit. It provides a service identity for each service, allowing developers to create service-level policies enforced for each individual application transaction, while providing non-replayable identity protection.

Istio is yet another step for GKE toward making it easier to secure and efficiently manage containerized applications. Head over to TechCrunch for more insights on this news.

- Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
- What's new in Google Cloud Functions serverless platform
- Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
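The mutual TLS (mTLS) that Istio adds transparently means both ends of a connection must present certificates, not just the server. As a minimal illustration of that idea -- using Python's standard ssl module rather than anything Istio-specific, since Istio's sidecar proxies do this on the application's behalf -- a server context that refuses unauthenticated clients looks like this:

```python
import ssl

# Mutual TLS: the server context below will refuse any client that cannot
# present a valid certificate of its own. Istio's sidecars configure the
# equivalent of this for every service, so application code never touches
# TLS settings directly.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # demand a client certificate

# In a real deployment you would also load the service's key pair and the
# mesh certificate authority, e.g.:
#   server_ctx.load_cert_chain("svc.crt", "svc.key")
#   server_ctx.load_verify_locations("mesh-ca.crt")

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The file names in the commented-out lines are placeholders; in Istio the certificates and the per-service identities behind them are issued and rotated automatically by the mesh.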
Uber open-sources Peloton, a unified Resource Scheduler

Natasha Mathur
27 Mar 2019
2 min read
Earlier this month, Uber open-sourced Peloton, a unified resource scheduler that manages resources across distinct workloads. Peloton, first introduced in November last year, is built on top of Mesos. "By allowing others in the cluster management community to leverage unified schedulers and workload co-location, Peloton will open the door for more efficient resource utilization and management across the community", states the Uber team.

Peloton is designed for web-scale companies such as Uber, with clusters of millions of containers and tens of thousands of nodes. It comes with advanced resource management capabilities such as elastic resource sharing, hierarchical max-min fairness, resource overcommits, and workload preemption. Peloton uses Mesos to aggregate resources from different hosts and then launch tasks as Docker containers. It also makes use of hierarchical resource pools to manage elastic, cluster-wide resources more efficiently. Before Peloton, each workload at Uber had its own cluster, which resulted in various inefficiencies. With Peloton, mixed workloads can be colocated in shared clusters for better resource utilization.

Peloton feature highlights
- Elastic resource sharing: Peloton supports hierarchical resource pools that help elastically share resources among different teams.
- Resource overcommit and task preemption: Peloton helps improve cluster utilization by scheduling workloads that use slack resources.
- Optimized for big data workloads: support is provided for advanced Apache Spark features such as dynamic resource allocation.
- Optimized for machine learning: support is provided for GPU and gang scheduling for TensorFlow and Horovod.
- High scalability: users can scale to millions of containers and tens of thousands of nodes.
"Open sourcing Peloton will enable greater industry collaboration and open up the software to feedback and contributions from industry engineers, independent developers, and academics across the world", states the Uber team.

- Uber and Lyft drivers strike in Los Angeles
- Uber and GM Cruise are open sourcing their Automation Visualization Systems
- Uber releases Ludwig, an open source AI toolkit that simplifies training deep learning models for non-experts
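Max-min fairness, one of the capabilities listed above, gives every resource pool as much as it demands up to an equal share, then redistributes the surplus among the pools that still want more. The following is a toy "water-filling" sketch of that policy in Python -- an illustration of the idea only, not Peloton's actual implementation:

```python
def max_min_fair(capacity, demands):
    # Water-filling: repeatedly satisfy the smallest unmet demand; any
    # surplus is shared equally among the pools that still want more.
    alloc = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    left = capacity
    while remaining:
        share = left / len(remaining)
        i = remaining[0]
        if demands[i] <= share:
            # This pool's full demand fits within an equal share.
            alloc[i] = demands[i]
            left -= demands[i]
            remaining.pop(0)
        else:
            # Everyone left gets an equal share of what remains.
            for j in remaining:
                alloc[j] = share
            break
    return alloc

# Capacity 10 split among demands 2, 8, 8: the small pool is fully
# satisfied, and the rest is divided equally among the larger pools.
print(max_min_fair(10, [2.0, 8.0, 8.0]))  # [2.0, 4.0, 4.0]
```

The "hierarchical" variant applies the same rule recursively: capacity is divided fairly among top-level pools (e.g., teams), and each pool then divides its allocation fairly among its children.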

Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results

Amrata Joshi
15 Jul 2019
3 min read
Last week, the MLPerf effort released the results for MLPerf Training v0.6, the second round of results from its machine learning training performance benchmark suite. These benchmarks are used by AI practitioners to adopt common standards for measuring the performance and speed of the hardware used to train AI models. As per these benchmark results, Nvidia and Google Cloud set new AI training time performance records.

MLPerf v0.6 measures the training performance of machine learning acceleration hardware in six categories: image classification, object detection (lightweight), object detection (heavyweight), translation (recurrent), translation (non-recurrent), and reinforcement learning. MLPerf is an association of more than 40 companies and researchers from leading universities, and the MLPerf benchmark suites are becoming the industry standard for measuring machine learning performance.

As per the results, Nvidia's Tesla V100 Tensor Core GPUs, running in an Nvidia DGX SuperPOD, completed on-premise training of the ResNet-50 model for image classification in 80 seconds. Nvidia was also the only vendor to submit results in all six categories. In 2017, when Nvidia launched the DGX-1 server, the same model training took 8 hours to complete. In a statement to ZDNet, Paresh Kharya, director of Accelerated Computing for Nvidia, said, "The progress made in just a few short years is staggering." He further added, "The results are a testament to how fast this industry is moving."

Google Cloud entered five categories and set three records for performance at scale with its Cloud TPU v3 Pods, Google's latest generation of supercomputers, built specifically for machine learning. The record-setting Cloud TPU Pod runs each used less than two minutes of compute time.
The TPU v3 Pods also showed record performance in machine translation from English to German, training the Transformer model in 51 seconds. Cloud TPU v3 Pods train models over 84% faster than the fastest on-premise systems in the MLPerf Closed Division. TPU Pods likewise achieved record performance in the image classification benchmark of the ResNet-50 model on the ImageNet data set, as well as in model training in another object detection category, finishing in 1 minute and 12 seconds.

In a statement to ZDNet, Google Cloud's Zak Stone said, "There's a revolution in machine learning." He further added, "All these workloads are performance-critical. They require so much compute, it really matters how fast your system is to train a model. There's a huge difference between waiting for a month versus a couple of days."

- Google suffers another outage as Google Cloud servers in the us-east1 region are cut off
- Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
- Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh
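A quick back-of-the-envelope check puts these numbers in perspective. (The "84% faster" figure is interpreted below as an 84% reduction in training time, which is one common reading of that phrasing.)

```python
# ResNet-50 training time fell from 8 hours on a 2017 DGX-1 to
# 80 seconds on a DGX SuperPOD in MLPerf Training v0.6.
dgx1_seconds = 8 * 60 * 60      # 8 hours, in seconds
superpod_seconds = 80
speedup = dgx1_seconds / superpod_seconds
print(speedup)  # 360.0 -> roughly a 360x improvement in about two years

# Reading "84% faster" as an 84% cut in wall-clock time, the TPU pod
# needs about 16% of the comparison system's training time:
on_prem_time = 100.0            # arbitrary baseline units
tpu_time = on_prem_time * (1 - 0.84)
print(round(tpu_time, 1))  # 16.0
```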