
Tech News - Cloud Computing

175 Articles

Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!

Melisha Dsouza
25 Feb 2019
4 min read
The ongoing Mobile World Congress 2019 in Barcelona has an interesting line-up of announcements, keynote speakers, summits, seminars and more. The largest mobile event in the world, it brings together the latest innovations and leading-edge technology from more than two thousand leading companies. The theme of this year's conference is 'Intelligent Connectivity': the combination of flexible, high-speed 5G networks, the Internet of Things (IoT), artificial intelligence (AI) and big data. Microsoft unveiled a host of new products along the same theme on the first day of the conference. Let's have a look at some of them.

#1 Microsoft HoloLens 2 AR announced!

Microsoft unveiled the HoloLens 2 AR device at the Mobile World Congress (MWC). This $3,500 AR device is aimed at businesses, not the average consumer, yet. It is designed primarily for situations where field workers need to work hands-free: manufacturing workers, industrial designers, the military, and so on. The device is a definite upgrade from Microsoft's first HoloLens, which recognized only basic tap and click gestures. The new headset recognizes 21 points of articulation per hand, allowing for improved and more realistic hand motions. The device is less bulky, and its eye tracking can measure eye movement and use it to interact with virtual objects. It is built to be a cloud- and edge-connected device, and its field of view more than doubles the area covered by HoloLens 1.

Microsoft said it plans to announce a follow-up to HoloLens 2 in the next year or two. According to Microsoft, that device will be even more comfortable and easier to use, and will do more than the HoloLens 2. HoloLens 2 is available for preorder and will ship later this year. The device has already found itself in the midst of a controversy after the US Army invested $480 million in more than 100,000 headsets; the contract has stirred dissent amongst Microsoft workers.

#2 Azure-powered Kinect camera for enterprise

The Azure-powered Kinect camera is an "intelligent edge device that doesn't just see and hear but understands the people, the environment, the objects, and their actions," according to Azure VP Julia White. This AI-powered smart enterprise camera leverages Microsoft's 3D imaging technology and could serve as a companion hardware piece for HoloLens in the enterprise. The system has a 1-megapixel depth camera, a 12-megapixel camera and a seven-microphone array on board to help it work with "a range of compute types, and leverage Microsoft's Azure solutions to collect that data." The system, priced at $399, is available for pre-order.

#3 Azure Spatial Anchors

Azure Spatial Anchors launched as part of the Azure mixed reality services, which help developers and businesses build cross-platform, contextual and enterprise-grade mixed reality applications. According to the Azure blog, these mixed reality apps can map, designate and recall precise points of interest that are accessible across HoloLens, iOS, and Android devices. Developers can integrate their solutions with IoT services and artificial intelligence, and protect sensitive data using security from Azure. Users can easily infuse AI and integrate IoT services to visualize data from IoT sensors as holograms. Spatial Anchors let users map their space and connect points of interest "to create wayfinding experiences, and place shareable, location-based holograms without any need for environmental setup or QR codes". Users can also manage identity, storage, security, and analytics with pre-built cloud integrations to accelerate their mixed reality projects.

#4 Unreal Engine 4 Support for Microsoft HoloLens 2

During the Mobile World Congress (MWC), Epic Games founder and CEO Tim Sweeney announced that support for Microsoft HoloLens 2 will be coming to Unreal Engine 4 in May 2019, with streaming and native platform integration. Sweeney says that "AR is the platform of the future for work and entertainment, and Epic will continue to champion all efforts to advance open platforms for the hardware and software that will power our daily lives." Unreal Engine 4 support for Microsoft HoloLens 2 will allow for "photorealistic" 3D in AR apps.

Head over to Microsoft's official blog for an in-depth insight on all the products released.

Unreal Engine 4.22 update: support added for Microsoft's DirectX Raytracing (DXR)
Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
Microsoft joins the OpenChain Project to help define standards for open source software compliance


Amazon Neptune, AWS’ cloud graph database, is now generally available

Savia Lobo
31 May 2018
2 min read
Last year, Amazon Web Services (AWS) announced the launch of its fast, reliable, and fully-managed cloud graph database, Amazon Neptune, at re:Invent 2017. Now, AWS has announced that Neptune is generally available.

Graph databases store the relationships between connected data as graphs. This enables applications to access the data in a single operation, rather than through a bunch of individual queries for all the data. Similarly, Neptune makes it easy for developers to build and run applications that work with highly connected datasets. And because Neptune is a managed graph database service, developers also get high scalability, security, durability, and availability.

The general availability release brings a large number of performance enhancements and updates, including:

- AWS CloudFormation support
- AWS Command Line Interface (CLI)/SDK support
- An update to Apache TinkerPop 3.3.2
- Support for IAM roles with bulk loading from Amazon Simple Storage Service (S3)

Some of the benefits of Amazon Neptune include:

- Neptune's query processing engine is highly optimized for two of the leading graph models, Property Graph and W3C's RDF, and their associated query languages, Apache TinkerPop Gremlin and SPARQL, giving customers the flexibility to choose the right approach for their specific graph use case.
- Neptune storage scales automatically, without downtime or performance degradation, as customer data grows.
- It allows developers to design sophisticated, interactive graph applications that can query billions of relationships with millisecond latency.
- There are no upfront costs, licenses, or commitments: customers pay only for the Neptune resources they use.

To know more about Amazon Neptune, visit its official blog.

2018 is the year of graph databases. Here's why.
From Graph Database to Graph Company: Neo4j's Native Graph Platform addresses evolving needs of customers
When, why and how to use Graph analytics for your big data
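To make the "single operation" point above concrete, here is a minimal Python sketch of a two-hop traversal over an in-memory adjacency structure, which is the shape of question a graph database like Neptune answers in one traversal rather than several separate lookups. The vertices, edges and the "follows" relationship are hypothetical illustration data, not Neptune's API.

```python
# A tiny in-memory property graph: vertices with properties, plus
# adjacency lists for outgoing "follows" edges (hypothetical data).
vertices = {
    "alice": {"kind": "person"},
    "bob": {"kind": "person"},
    "carol": {"kind": "person"},
}
follows = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": [],
}

def two_hop(start):
    """Who do the people that `start` follows themselves follow?

    Mirrors the spirit of a Gremlin-style traversal such as
    g.V(start).out('follows').out('follows'): one walk over stored
    relationships instead of several independent queries.
    """
    result = []
    for friend in follows.get(start, []):
        for friend_of_friend in follows.get(friend, []):
            result.append(friend_of_friend)
    return result

print(two_hop("alice"))  # ['carol']
```

A relational store would typically answer the same question with one query per hop; here the relationship is the stored structure, so the traversal is the query.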


Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads

Melisha Dsouza
27 Nov 2018
3 min read
The Amazon re:Invent 2018 conference saw a surge of new announcements and releases. The five-day event, which commenced in Las Vegas yesterday, has already seen some exciting developments in AWS, like AWS RoboMaker, AWS Transfer for SFTP (a fully managed SFTP service for Amazon S3), EC2 A1 instances powered by Arm-based AWS Graviton processors, an improved AWS Snowball Edge, and much more. In this article, we look at their latest release: 'Firecracker', a new virtualization technology and open source project for running multi-tenant container workloads.

Firecracker is open sourced under Apache 2.0 and enables service owners to operate secure multi-tenant container-based services. It combines the speed, resource efficiency, and performance enabled by containers with the security and isolation offered by traditional VMs. Firecracker implements a virtual machine manager (VMM) based on Linux's Kernel-based Virtual Machine (KVM). Users can create and manage microVMs with any combination of vCPU and memory through a RESTful API. It offers a fast startup time, a reduced memory footprint for each microVM, and a trusted sandboxed environment for each container.

Features of Firecracker

- Firecracker uses multiple levels of isolation and protection, making it secure by design. The security model includes a very simple virtualized device model to minimize the attack surface, a process jail, and static linking.
- It delivers high performance, allowing users to launch a microVM in as little as 125 ms.
- It has a low overhead, consuming about 5 MiB of memory per microVM, so a user can run thousands of secure VMs with widely varying vCPU and memory configurations on the same instance.
- Firecracker is written in Rust, which guarantees thread safety and prevents many types of buffer overrun errors that can lead to security vulnerabilities.

The AWS community has shown a positive response towards this release: https://twitter.com/abbyfuller/status/1067285030035046400

AWS Lambda uses Firecracker for provisioning and running secure sandboxes to execute customer functions. These sandboxes can be quickly provisioned with a minimal footprint, enabling performance along with security. AWS Fargate tasks also execute on Firecracker microVMs, which allows the Fargate runtime layer to run faster and more efficiently on EC2 bare metal instances.

To learn more, head over to the Firecracker page. You can also read more at Jeff Barr's blog and the Open Source blog.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
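As a rough illustration of the RESTful workflow described above, the sketch below builds the JSON payloads a Firecracker microVM is typically configured with (machine config, boot source, then an InstanceStart action); in a real setup each payload is PUT to the VMM's Unix-socket API. The kernel path, boot args and sizes are hypothetical examples, and this only assembles the requests; consult the Firecracker API docs before relying on exact field names.

```python
import json

# Sketch of the request bodies Firecracker's RESTful API expects when
# configuring a microVM. Here we only build the JSON; a real client
# would PUT each body to the Unix domain socket the VMM listens on
# (e.g. /tmp/firecracker.socket).
machine_config = {"vcpu_count": 2, "mem_size_mib": 512}
boot_source = {
    "kernel_image_path": "/images/vmlinux.bin",  # hypothetical path
    "boot_args": "console=ttyS0 reboot=k panic=1",
}
start_action = {"action_type": "InstanceStart"}

requests_to_send = [
    ("PUT", "/machine-config", machine_config),
    ("PUT", "/boot-source", boot_source),
    ("PUT", "/actions", start_action),
]

for method, path, body in requests_to_send:
    print(method, path, json.dumps(body))
```

The order matters: the microVM's vCPU/memory shape and boot source are configured first, and the InstanceStart action boots it.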


SAP Cloud Platform is now generally available on Microsoft Azure

Savia Lobo
11 Jun 2018
3 min read
Microsoft has announced that SAP Cloud Platform is now generally available on its Azure platform. SAP Cloud Platform enables developers to build SAP applications and extensions using a PaaS development platform along with integrated services. With the platform generally available, developers can now deploy Cloud Foundry-based SAP Cloud Platform on Azure. This is currently available in the West Europe region, and Microsoft is working with SAP to enable more regions in the months to come. With SAP HANA's availability on Microsoft Azure, one can expect:

Largest SAP HANA optimized VM size in the cloud

Microsoft will soon launch the Azure M-series, which will support large-memory virtual machines with sizes up to 12 TB, based on Intel Xeon Scalable (Skylake) processors and offering the most memory available of any VM in the public cloud. The M-series will help customers push the limits of virtualization in the cloud for SAP HANA.

Availability of a range of SAP HANA certified VMs

For customers who wish to use smaller instances, Microsoft also offers smaller M-series VM sizes, ranging from 192 GB to 4 TB across 10 different VM sizes, extending Azure's SAP HANA certified M-series line. These smaller M-series instances are on-demand and SAP-certified, with the flexibility to spin up or scale up in less time, and to spin down to save costs within a pay-as-you-go model available worldwide. Such flexibility and agility is not possible with a private cloud or on-premises SAP HANA deployment.

24 TB bare-metal instance and optimized price per TB

For customers who need a higher-performance dedicated offering for SAP HANA, Microsoft now offers additional SAP HANA TDIv5 options of 6 TB, 12 TB, 18 TB, and 24 TB configurations, in addition to the current configurations from 0.7 TB to 20 TB. For customers who require more memory but the same number of cores, these configurations deliver a better price per TB deployed.

A lot more options for SAP HANA in the cloud

SAP HANA on Azure now spans 26 distinct offerings from 192 GB to 24 TB, with scale-up certification up to 20 TB and scale-out certification up to 60 TB. With global availability in 12 regions and plans to increase to 22 regions in the next 6 months, Azure now offers the most choice for SAP HANA workloads of any public cloud. Microsoft Azure also enables customers to extract insights and analytics from SAP data with services such as the Azure Data Factory SAP HANA connector to automate data pipelines, Azure Data Lake Store for hyper-scale data storage, and Power BI, an industry-leading self-service visualization tool, to create rich dashboards and reports from SAP ERP data.

Read more about SAP Cloud Platform on Azure on the Microsoft Azure blog.

How to perform predictive forecasting in SAP Analytics Cloud
Epicor partners with Microsoft Azure to adopt Cloud ERP
New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL


‘Istio’ available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management

Melisha Dsouza
12 Dec 2018
3 min read
KubeCon+CloudNativeCon, happening in Seattle this week, has excited developers with a plethora of new announcements and releases. The conference, dedicated to Kubernetes and other cloud native technologies, brings together adopters and technologists from leading open source and cloud native communities to discuss new advancements on the cloud front. At this year's conference, Google Cloud announced the beta availability of 'Istio' for its Google Kubernetes Engine (GKE).

Istio launched in mid-2017 as a collaboration between Google, IBM and Lyft. According to Google, this open-source "service mesh", used to connect, manage and secure microservices on a variety of platforms like Kubernetes, will play a vital role in helping developers make the most of their microservices. Yesterday, Google Cloud Director of Engineering Chen Goldberg and Director of Product Management Jennifer Lin said in a blog post that the availability of Istio on Google Kubernetes Engine will provide "more granular visibility, security and resilience for Kubernetes-based apps". This service will be made available through Google's Cloud Services Platform, which bundles together the tools and services developers need to get their container apps up and running on the company's cloud or in on-premises data centres.

In an interview with SiliconANGLE, Holger Mueller, principal analyst and vice president of Constellation Research Inc., compared software containers to "cars": "Kubernetes has built the road but the cars have no idea where they are, how fast they are driving, how they interact with each other or what their final destination is. Enter Istio and enterprises get all of the above. Istio is a logical step for Google and a sign that the next level of deployments is about manageability, visibility and awareness of what enterprises are running."

Additional features of Istio

- Istio allows developers and operators to manage applications as services, not as lots of different infrastructure components.
- Istio allows users to encrypt all their network traffic, layering transparently onto any existing distributed application. Users need not embed any client libraries in their code to get this functionality.
- Istio on GKE comes with an integration into Stackdriver, Google Cloud's monitoring and logging service.
- Istio securely authenticates and connects a developer's services to one another. It transparently adds mTLS to service communication, encrypting all information in transit. It provides a service identity for each service, allowing developers to create service-level policies enforced for each individual application transaction, while providing non-replayable identity protection.

Istio is yet another step for GKE that will make it easier to secure and efficiently manage containerized applications. Head over to TechCrunch for more insights on this news.

Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
What's new in Google Cloud Functions serverless platform
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads


Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results

Amrata Joshi
15 Jul 2019
3 min read
Last week, the MLPerf effort released the results for MLPerf Training v0.6, the second round of results from its machine learning training performance benchmark suite. These benchmarks are used by AI practitioners to adopt common standards for measuring the performance and speed of hardware used to train AI models. As per these benchmark results, Nvidia and Google Cloud set new AI training time performance records.

MLPerf v0.6 studies the training performance of machine learning acceleration hardware in six categories: image classification, object detection (lightweight), object detection (heavyweight), translation (recurrent), translation (non-recurrent) and reinforcement learning. MLPerf is an association of more than 40 companies and researchers from leading universities, and the MLPerf benchmark suites are becoming the industry standard for measuring machine learning performance.

As per the results, Nvidia's Tesla V100 Tensor Core GPUs, in an Nvidia DGX SuperPOD, completed on-premise training of the ResNet-50 model for image classification in 80 seconds. Nvidia was also the only vendor to submit results in all six categories. In 2017, when Nvidia launched the DGX-1 server, the same model training took 8 hours. In a statement to ZDNet, Paresh Kharya, director of Accelerated Computing for Nvidia, said, "The progress made in just a few short years is staggering." He further added, "The results are a testament to how fast this industry is moving."

Google Cloud entered five categories and set three records for performance at scale with its Cloud TPU v3 Pods, Google's latest generation of supercomputers, built specifically for machine learning. Each record-setting run used less than two minutes of compute time. The TPU v3 Pods set a record in machine translation from English to German, training the Transformer model in 51 seconds. Cloud TPU v3 Pods train models over 84% faster than the fastest on-premise systems in the MLPerf Closed Division. TPU Pods also achieved record performance in the image classification benchmark of the ResNet-50 model on the ImageNet data set, as well as model training in another object detection category, in 1 minute and 12 seconds.

In a statement to ZDNet, Google Cloud's Zak Stone said, "There's a revolution in machine learning." He further added, "All these workloads are performance-critical. They require so much compute, it really matters how fast your system is to train a model. There's a huge difference between waiting for a month versus a couple of days."

Google suffers another outage as Google Cloud servers in the us-east1 region are cut off
Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh
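A quick back-of-the-envelope check on the ResNet-50 numbers quoted above (8 hours on a 2017 DGX-1 versus 80 seconds now), assuming "X% faster" means the fraction of the slower run's time that is saved:

```python
# Sanity-check the training-time comparisons quoted in the article.

def speedup(slow_seconds: float, fast_seconds: float) -> float:
    """How many times faster the fast run is than the slow run."""
    return slow_seconds / fast_seconds

def percent_faster(slow_seconds: float, fast_seconds: float) -> float:
    """Time saved relative to the slower run, as a percentage."""
    return 100.0 * (slow_seconds - fast_seconds) / slow_seconds

# ResNet-50: 8 hours on a 2017 DGX-1 vs 80 seconds on a DGX SuperPOD.
dgx1 = 8 * 60 * 60  # 28,800 seconds
superpod = 80

print(speedup(dgx1, superpod))                    # 360.0
print(round(percent_faster(dgx1, superpod), 2))   # 99.72
```

So the two-year improvement Kharya calls "staggering" is a 360x reduction in wall-clock training time for this benchmark.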

Atlassian overhauls its Jira software with customizable workflows, new tech stack, and roadmaps tool

Sugandha Lahoti
19 Oct 2018
3 min read
Atlassian has completely revamped its traditional Jira software, adding a simplified user experience, new third-party integrations, and a new product roadmaps tool. In yesterday's announcement on their official blog, they say they have "rolled out an entirely new project experience for the next generation with a focus on making Jira Simply Powerful." Sean Regan, head of growth for Software Teams at Atlassian, said that with a more streamlined and simplified application, Atlassian hopes to appeal to a wider range of business execs involved in the software-creation process.

What's new in the revamped Jira software?

Powerful tech stack: Jira Software has been transformed into a modern cloud app, with an updated tech stack, permissions, and UX. Developers have more autonomy, administrators have more flexibility and advanced users have more power. "Additionally, we've made Jira simpler to use across the board. Now, anyone who works with development teams can collaborate more easily."

Customizable workflow: To upgrade the user experience, Atlassian has introduced a new build-your-own-boards feature. Users can customize their own workflow, issue types, and fields for the board, without administrator access and without jeopardizing other projects' customizations. (Source: Jira blog) This customizable workflow was inspired by Trello, the task management app Atlassian acquired for $425 million in 2017. "What we tried to do in this new experience is mirror the power that people know and love about Jira, with the simplicity of an experience like Trello," said Regan.

Third-party integrations: The new Jira comes with almost 600 third-party integrations. These third-party applications, Atlassian said, should help appeal to a broader range of job roles that interact with developers. Integrations include Adobe, Sketch, and InVision, as well as Facebook's Workplace and updated integrations for Gmail and Slack.

Jira Cloud Mobile: Jira Cloud Mobile lets developers access their projects from their smartphones. Developers can create, read, update, and delete issues and columns; groom their backlog; start and complete sprints; and respond to comments and tag relevant stakeholders, all from their mobile.

Roadmapping tool: Jira now features a brand-new roadmaps tool that makes it easier for teams to see the big picture. "When you have multiple teams coordinating on multiple projects at the same time, shipping different features at different percentage releases, it's pretty easy for nobody to know what is going on," said Regan. "Roadmaps helps bring order to the chaos of software development." (Source: Jira blog)

Pricing for the Jira software varies by the number of users: $10 per user per month for teams of up to 10 people; $7 per user per month for teams of between 11 and 100 users; and varying prices for teams larger than 100. The company also offers a free 7-day trial.

Read more about the release on the Jira blog. You can also have a look at their public roadmap.

Atlassian acquires OpsGenie, launches Jira Ops to make incident response more powerful
GitHub's new integration for Jira Software Cloud aims to provide teams with a seamless project management experience
Atlassian open sources Escalator, a Kubernetes autoscaler project
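The tiered pricing quoted above can be sketched as a small function. The two tiers are taken from the article; returning None for teams larger than 100, where the article only says prices vary, is an assumption of this sketch.

```python
def jira_monthly_cost(users: int):
    """Monthly cost in USD for a team, per the tiers quoted above.

    Returns None for teams larger than 100, where the article only
    says pricing varies (an assumption for this sketch).
    """
    if users <= 0:
        raise ValueError("team size must be positive")
    if users <= 10:
        return users * 10   # $10 per user per month, up to 10 people
    if users <= 100:
        return users * 7    # $7 per user per month, 11-100 people
    return None             # larger teams: pricing varies

print(jira_monthly_cost(10))  # 100
print(jira_monthly_cost(25))  # 175
```

Note the per-seat rate drops at the 11-user boundary, so a 12-person team ($84/month) pays less than a 10-person team ($100/month) under these quoted tiers.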


OpenSky is now a part of the Alibaba family

Bhagyashree R
06 Sep 2018
2 min read
Yesterday, Chris Keane, the General Manager of OpenSky, announced that OpenSky has been acquired by the Alibaba Group. OpenSky is a network of businesses that empower modern global trade for SMBs and help people discover, buy, and share unique goods that match their individual taste.

OpenSky will join Alibaba Group in two capacities:

- One OpenSky team will become part of Alibaba.com's North America B2B business, serving US-based buyers and suppliers.
- The other team will become a wholly-owned subsidiary of Alibaba Group, consisting of OpenSky's marketplace and SaaS businesses.

In 2015, Alibaba Group acquired a minority stake in OpenSky. In 2017, OpenSky collaborated with Alibaba's B2B leadership team to solve the challenges faced by small businesses. According to Chris, both companies share a common interest: helping small businesses. "It was thrilling to discover that our counterparts at Alibaba share our obsession with helping SMBs. We've quickly aligned on a global vision to provide access to markets and resources for businesses and entrepreneurs, opening new doors and knocking down obstacles."

In the announcement, Chris also mentioned that in the near future they will introduce powerful concepts to serve small businesses everywhere. To know more, read the official announcement on LinkedIn.

Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
Digitizing the offline: How Alibaba's FashionAI can revive the waning retail industry
Why Alibaba cloud could be the dark horse in the public cloud race


Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more

Amrata Joshi
26 Mar 2019
2 min read
Yesterday, the team at Kubernetes released Kubernetes 1.14, a new update to the popular open-source container orchestration system. Kubernetes 1.14 comes with support for Windows nodes, a kubectl plugin mechanism, Kustomize integration, and much more. https://twitter.com/spiffxp/status/1110319044249309184

What's new in Kubernetes 1.14?

Support for Windows nodes: This release adds support for Windows nodes as worker nodes, so Kubernetes can now schedule Windows containers, enabling a vast ecosystem of Windows applications. Enterprises with investments in both platforms can manage their workloads and gain operational efficiencies across their deployments, regardless of operating system.

Kustomize integration: The declarative resource config authoring capabilities of Kustomize are now available in kubectl through the -k flag. Kustomize helps users author and reuse resource config using Kubernetes-native concepts.

kubectl plugin mechanism: This release includes a kubectl plugin mechanism that allows developers to publish their own custom kubectl subcommands in the form of standalone binaries.

PID limiting: Administrators can now provide pod-to-pod PID (process ID) isolation by setting a default limit on the number of PIDs per pod. Pod priority and preemption in this release enables the Kubernetes scheduler to schedule important pods first and evict less important pods to create room for them.

Users are generally happy and excited about this release. https://twitter.com/fabriziopandini/status/1110284805411872768 A user commented on Hacker News, "The inclusion of Kustomize[1] into kubectl is a big step forward for the K8s ecosystem as it provides a native solution for application configuration. Once you really grok the pattern of using overlays and patches, it starts to feel like a pattern that you'll want to use everywhere."

To know more about this release in detail, check out Kubernetes' official announcement.

RedHat's OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested 'Operators' for applications
Microsoft open sources 'Accessibility Insights for Web', a Chrome extension to help web developers fix their accessibility issues
Microsoft open sources the Windows Calculator code on GitHub
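To illustrate the overlays-and-patches pattern that the Hacker News commenter describes, here is a minimal Python sketch of merging an overlay patch over a base resource config. It mimics only the recursive-merge idea behind Kustomize, not its actual implementation, and the Deployment fields are hypothetical examples.

```python
def merge(base: dict, patch: dict) -> dict:
    """Recursively merge `patch` over `base` (patch values win)."""
    out = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# Base config shared by all environments (hypothetical example).
base = {
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {"replicas": 1, "template": {"image": "web:1.0"}},
}

# A production overlay states only what differs from the base.
prod_patch = {"spec": {"replicas": 5}}

prod = merge(base, prod_patch)
print(prod["spec"])  # {'replicas': 5, 'template': {'image': 'web:1.0'}}
```

This is the appeal of the pattern: the base stays untouched, and each environment's overlay is a small declarative delta rather than a full copy of the config.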


Oracle’s bid protest against U.S Defence Department’s(Pentagon) $10 billion cloud contract

Savia Lobo
09 Aug 2018
2 min read
On Monday, Oracle Corp filed a protest with the Government Accountability Office (GAO) against the Pentagon's $10 billion JEDI (Joint Enterprise Defense Infrastructure) cloud contract. Oracle believes the contract should not be awarded to a single company, but should instead allow for multiple winners. https://twitter.com/92newschannel/status/1027090662162944000

The U.S. Defense Department unveiled the competition in July and stated that there would be a single winner: the bidder enabling the most rapid adoption of cloud technology. Deborah Hellinger, Oracle's spokeswoman, said in a statement on Tuesday, "The technology industry is innovating around next-generation cloud at an unprecedented pace and JEDI virtually assures DoD will be locked into a legacy cloud for a decade or more. The single-award approach is contrary to the industry's multi-cloud strategy, which promotes constant competition, fosters innovation and lowers prices."

A bid protest is a challenge to the terms of a solicitation or the award of a federal contract. The GAO, which adjudicates and decides these challenges, will issue a ruling on the protest by November 14. This is the first bid protest filed since the JEDI competition began.

Amazon.com is seen as a top contender for the deal. Amazon Web Services (AWS) is the only company approved by the U.S. government to handle secret and top secret data. The competition has therefore attracted criticism from companies that fear AWS, Amazon's cloud unit, will win the contract, choking out the hopes of others (Microsoft, Oracle, IBM and Alphabet's Google) to win the government cloud computing contract.

Read more about this news on The Register.

Oracle makes its Blockchain cloud service generally available
Google employees quit over company's continued Artificial Intelligence ties with the Pentagon
Oracle reveals issues in Object Serialization. Plans to drop it from core Java.

Jeff Bezos: Amazon will continue to support U.S. Defense Department

Richard Gall
16 Oct 2018
2 min read
Just days after Google announced that it was pulling out of the race to win the $10 billion JEDI contract from the Pentagon, Amazon's Jeff Bezos has stated that Amazon will continue to support Pentagon and Defense projects. Bezos went further, criticizing tech companies that don't work with the military. Speaking at the Wired25 conference, the Amazon chief said: "if big tech companies are going to turn their back on U.S. Department of Defense (DoD), this country is going to be in trouble... One of the jobs of senior leadership is to make the right decision, even when it’s unpopular."

Bezos remains unfazed by criticism

It would seem that Bezos isn't fazed by the criticism other companies have faced. Google explained its withdrawal by saying "we couldn’t be assured that it would align with our AI Principles." However, it's likely that the significant internal debate about the ethical uses of AI, as well as a wave of protests against Project Maven earlier in the year, were critical components in the final decision.

Microsoft remains in the running for the JEDI contract, but there appears to be much more internal conflict over the issue. Anonymous Microsoft employees have, for example, published an open letter to senior management on Medium. The letter states: "What are Microsoft's AI Principles, especially regarding the violent application of powerful A.I. technology? How will workers, who build and maintain these services in the first place, know whether our work is being used to aid profiling, surveillance, or killing?"

Clearly, Jeff Bezos isn't too worried about upsetting his employees. Perhaps the story says something about the difference in the corporate structure of these huge companies: while they all have high-profile management teams, it's only at Amazon that the single figure of Bezos reigns supreme in the spotlight. With Blue Origin he's got his sights set on something far beyond ethical decision making: sending humans into space. Cynics might even say it's the logical extension of the implicit imperialism of his enthusiasm for the Pentagon.


Google Cloud security launches three new services for better threat detection and protection in enterprises

Melisha Dsouza
08 Mar 2019
2 min read
This week, Google Cloud Security announced a host of new services to empower customers with advanced security functionality that is easy to deploy and use. The lineup includes the Web Risk API, Cloud Armor, and Cloud HSM.

#1 Web Risk API

The Web Risk API has been released in beta to help keep users safe on the web. It includes data on more than a million unsafe URLs, and billions of URLs are examined each day to keep this data up to date. Client applications can use a simple API call to check URLs against Google's lists of unsafe web resources. These lists include social engineering sites, deceptive sites, and sites that host malware or unwanted software.

#2 Cloud Armor

Cloud Armor is a Distributed Denial of Service (DDoS) defense and Web Application Firewall (WAF) service for Google Cloud Platform (GCP), based on the technologies used to protect services like Search, Gmail and YouTube. Cloud Armor is now generally available, offering L3/L4 DDoS defense as well as IP allow/deny capabilities for applications or services behind the Cloud HTTP(S) Load Balancer. Users can permit or block incoming traffic based on IP addresses or ranges using allow lists and deny lists, and can customize their defenses and mitigate multivector attacks through Cloud Armor's flexible rules language.

#3 HSM keys to protect data in the cloud

Cloud HSM is now generally available. It allows customers to protect encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs, without the operational overhead of HSM cluster management, scaling and patching. The Cloud HSM service is fully integrated with Cloud Key Management Service (KMS), allowing users to create and use customer-managed encryption keys (CMEK) that are generated and protected by a FIPS 140-2 Level 3 hardware device.

You can head over to Google Cloud Platform's official blog to know more about these releases.
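To make the "simple API call" concrete, here is a minimal sketch of building a Web Risk lookup request. The endpoint path and parameter names (`v1beta1/uris:search`, `threatTypes`, `uri`, `key`) are assumptions based on the beta announcement; check Google's current Web Risk API reference before relying on them.

```python
import urllib.parse

# Hypothetical beta endpoint for single-URI lookups (verify against the
# official Web Risk API reference).
WEB_RISK_ENDPOINT = "https://webrisk.googleapis.com/v1beta1/uris:search"

def build_lookup_url(uri, api_key, threat_types=("MALWARE", "SOCIAL_ENGINEERING")):
    """Build the GET URL asking Web Risk whether `uri` appears on an unsafe list."""
    params = [("threatTypes", t) for t in threat_types]
    params += [("uri", uri), ("key", api_key)]
    return WEB_RISK_ENDPOINT + "?" + urllib.parse.urlencode(params)

url = build_lookup_url("http://example.com/page", "YOUR_API_KEY")
```

Fetching `url` (e.g. with `urllib.request.urlopen`) would return a JSON body listing any matched threat types, or an empty object if the URI is not on a list.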
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Google’s Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
Build Hadoop clusters using Google Cloud Platform [Tutorial]


Tumblr open sources its Kubernetes tools for better workflow integration

Melisha Dsouza
15 Jan 2019
3 min read
Yesterday, Tumblr announced that it is open sourcing three tools, developed in-house, that help developers integrate Kubernetes into their workflows. Tumblr built these tools over the course of its own migration to Kubernetes. Here are the three tools and their features, as listed on the Tumblr blog:

#1 k8s-sidecar-injector

Containerizing complex applications can be time-consuming. Sidecars offer a way out, allowing developers to emulate older deployments with co-located services on virtual machines or physical hosts. The k8s-sidecar-injector dynamically injects sidecars, volumes, and environment data into pods as they are launched, reducing the overhead of copy-pasting code to add sidecars to deployments and cronjobs. The tool listens to the Kubernetes API for pod launches and determines which sidecar to inject. It is especially useful when containerizing legacy applications that require a complex sidecar configuration.

#2 k8s-config-projector

The k8s-config-projector is a command-line tool born out of the need to give applications access to a subset of configuration data (feature flags, lists of hosts/IPs+ports, and application settings) and to inform them as soon as this data changes. Config data defines how deployed services operate at Tumblr. The Kubernetes ConfigMap resource lets users provide a service with configuration data and update that data in running pods without redeploying the application. To use this feature to configure Tumblr's services and jobs in a Kubernetes-native manner, the team had to bridge the gap between their canonical configuration store (a git repo of config files) and ConfigMaps.

k8s-config-projector combines the git repo hosting configuration data with "projection manifest" files that describe how to group/extract settings from the config repo and transmute them into ConfigMaps. Developers can now encode the set of configuration data that an application needs to run into a projection manifest. The blog states that 'as the configuration data changes in the git repository, CI will run the projector, projecting and deploying new ConfigMaps containing this updated data, without needing the application to be redeployed'.

#3 k8s-secret-projector

Tumblr stores secure credentials (passwords, certificates, etc.) in access-controlled vaults. With the k8s-secret-projector, developers can request access to subsets of credentials for a given application without being granted access to the secrets as a whole. The tool ensures applications always have the appropriate secrets at runtime, while enabling automated systems (certificate refreshers, DB password rotations, etc.) to manage and update these credentials without redeploying or restarting the application. It does this by combining two repositories: projection manifests and credentials. A Continuous Integration (CI) tool such as Jenkins runs the tool against any change in the projection manifests repository, generating new Kubernetes Secret YAML files; Continuous Deployment then deploys the generated and validated Secret files to any number of Kubernetes clusters. The tool also allows secrets to be deployed in Kubernetes environments that encrypt generated Secrets before they touch disk.

You can head over to Tumblr's official blog for examples of each tool.
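The core idea behind sidecar injection is a mechanical merge: extra containers, volumes, and environment variables are folded into a pod spec at launch time. A toy illustration of that merge (not Tumblr's actual implementation; the sidecar names and images are made up):

```python
import copy

# Hypothetical sidecar definition: one extra container, one shared volume,
# and an env var to be added to every container in the pod.
SIDECAR = {
    "containers": [{"name": "log-shipper", "image": "example/log-shipper:1.0"}],
    "volumes": [{"name": "shared-logs", "emptyDir": {}}],
    "env": [{"name": "LOG_DIR", "value": "/var/log/app"}],
}

def inject_sidecar(pod, sidecar=SIDECAR):
    """Return a copy of `pod` with the sidecar's containers/volumes/env merged in."""
    pod = copy.deepcopy(pod)  # leave the caller's pod spec untouched
    spec = pod.setdefault("spec", {})
    spec.setdefault("containers", []).extend(copy.deepcopy(sidecar["containers"]))
    spec.setdefault("volumes", []).extend(copy.deepcopy(sidecar["volumes"]))
    for container in spec["containers"]:
        container.setdefault("env", []).extend(copy.deepcopy(sidecar["env"]))
    return pod

pod = {"spec": {"containers": [{"name": "app", "image": "example/app:2.3"}]}}
injected = inject_sidecar(pod)
```

In the real tool this merge happens server-side, triggered by the Kubernetes API on pod launch, rather than by calling a function by hand.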
Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

Dr. Fei Fei Li, Google's AI Cloud head steps down amidst speculations; Dr. Andrew Moore to take her place

Melisha Dsouza
11 Sep 2018
4 min read
Yesterday, Diane Greene, the CEO of Google Cloud, announced in a blog post that Chief Artificial Intelligence Scientist Dr. Fei-Fei Li will be replaced by Dr. Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University, at the end of this year. The blog further mentions that, as originally planned, Dr. Fei-Fei Li will return to her professorship at Stanford and will in the meantime transition to being an AI/ML Advisor for Google Cloud. The timing of the transition, following the controversies surrounding Google and the Pentagon's Project Maven, is not lost on many.

Flashback on the 'Project Maven' protest and its outcry

In March 2017, it was revealed that Google Cloud, headed by Greene, had signed a secret $9m contract with the United States Department of Defense called 'Project Maven'. The project aimed to develop an AI system that could help recognize people and objects captured in military drone footage. The contract was crucial to the Google Cloud Platform gaining a key US government FedRAMP authorization, and was expected to help Google find future government work worth potentially billions of dollars. Planned for non-offensive purposes only, Project Maven also had the potential to expand into a $250m deal. Google provided the Department of Defense with its TensorFlow APIs to assist in object recognition, which the Pentagon believed would eventually turn its stores of video into "actionable intelligence".

In September 2017, in a leaked email reviewed by The New York Times, Scott Frohman, Google's head of defense and intelligence sales, asked Dr. Li, Google Cloud AI's leader and Chief Scientist, for directions on the "burning question" of how to publicize this news to the masses. She replied: "Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google."

As predicted by Dr. Li, the project was met with outrage by more than 3,000 Google employees, who believed that Google shouldn't be involved in any military work and that algorithms have no place in identifying potential targets. This caused a rift in Google's workforce, fueled heated staff meetings and internal exchanges, and prompted some employees to resign. Many employees were "deeply concerned" that the data collected by Google could be integrated with military surveillance data for targeted killing. Fast forward to June 2018, when Google stated that it would not renew its contract (due to expire in 2019) with the Pentagon.

Dr. Li's timeline at Google

During her two-year tenure, Dr. Li oversaw remarkable work in accelerating the adoption of AI and ML by developers and Google Cloud customers. Considered one of the most talented machine learning researchers in the world, Dr. Li has published more than 150 scientific articles in top-tier journals and conferences, including Nature, the Journal of Neuroscience, and the New England Journal of Medicine. Dr. Li is the inventor of ImageNet and the ImageNet Challenge, a large-scale effort contributing to the latest developments in computer vision and deep learning in AI. She has been a keynote or invited speaker at many conferences, has received prestigious awards for innovation and technology, and has been featured in many magazines. Beyond her contributions to tech, Dr. Li is also a co-founder of Stanford's renowned SAILORS outreach program for high-school girls and of the national non-profit AI4ALL.

The controversial email from Dr. Li may lead one to wonder whether the transition was a result of the events of 2017. However, no official statement has been released by Google or Dr. Li on why she is moving on. Head over to Google's blog for the official announcement of this news.
Google CEO Sundar Pichai won’t be testifying to Senate on election interference
Bloomberg says Google, Mastercard covertly track customers’ offline retail habits via a secret million dollar ad deal
Epic games CEO calls Google “irresponsible” for disclosing the security flaw in Fortnite Android Installer before patch was ready


RedHat contributes etcd, a distributed key-value store project, to the Cloud Native Computing Foundation at KubeCon + CloudNativeCon

Amrata Joshi
12 Dec 2018
2 min read
Yesterday, at the ongoing KubeCon + CloudNativeCon North America 2018, Red Hat announced its contribution of etcd, an open source project, and etcd's acceptance into the Cloud Native Computing Foundation (CNCF). Red Hat participates in developing etcd as part of its enterprise Kubernetes product, Red Hat OpenShift.

https://twitter.com/coreos/status/1072562301864161281

etcd is an open source, distributed, consistent key-value store for service discovery, shared configuration, and scheduler coordination. It is a core component of software that comes with safer automatic updates, and it also sets up overlay networking for containers. The CoreOS team created etcd in 2013, and Red Hat engineers have maintained it since, working alongside professionals from across the industry.

The etcd project focuses on safely storing critical data of a distributed system, and it is the primary data store for Kubernetes. It uses the Raft consensus algorithm for replicated logs, so applications can maintain consistent uptime and keep working even when individual servers fail. The project is progressing steadily: it already has 157 releases, the latest being v3.3.10, released just two months ago. etcd is designed as a consistent store across environments, including public cloud, hybrid cloud, and bare metal.

Where is etcd used?

Kubernetes clusters use etcd as their primary data store, so Red Hat OpenShift customers and Kubernetes users benefit from the community's work on the etcd project. It is also used by communities and users like Uber, Alibaba Cloud, Google Cloud, Amazon Web Services, and Red Hat. etcd will now live under the Linux Foundation, with its domains and accounts managed by the CNCF. The community of etcd maintainers, including Red Hat, Alibaba Cloud, Google Cloud, Amazon, and others, won't change, and the project will continue to focus on the communities that depend on it.
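At its core, etcd exposes a consistent key-value interface in which every write bumps a global revision, so readers can tell exactly which version of a value they saw. A toy, single-process sketch of those semantics (not etcd itself, and with none of the Raft replication that makes real etcd fault-tolerant):

```python
# Toy model of etcd's versioned key-value semantics: every put increments a
# global revision counter, and gets return both the value and the revision at
# which it was last modified. Real etcd replicates each write across servers
# via Raft before acknowledging it; this sketch is purely in-memory.
class MiniKV:
    def __init__(self):
        self._data = {}      # key -> (value, mod_revision)
        self._revision = 0   # global revision, bumped on every write

    def put(self, key, value):
        """Store `value` under `key`; return the new global revision."""
        self._revision += 1
        self._data[key] = (value, self._revision)
        return self._revision

    def get(self, key):
        """Return (value, mod_revision), or (None, 0) if the key is absent."""
        return self._data.get(key, (None, 0))

store = MiniKV()
store.put("/config/feature-x", "on")
rev = store.put("/config/feature-x", "off")
value, mod_rev = store.get("/config/feature-x")
```

This revision counter is what lets systems built on etcd (like Kubernetes) watch for changes and detect stale reads.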
Red Hat will continue extending etcd with the etcd Operator, bringing more security and operational ease. The Operator enables users to easily configure and manage etcd through a declarative configuration that creates, configures, and manages etcd clusters. Read more about this news on Red Hat's official blog.

RedHat shares what to expect from next week’s first-ever DNSSEC root key rollover
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018