
Tech News

3711 Articles
Savia Lobo
01 Dec 2017
8 min read

Data science announcements at Amazon re:Invent 2017

Continuing from our previous post, Amazon's re:Invent 2017 brought a host of new announcements across three specific data science domains: databases, IoT, and machine learning.

Databases

Databases were one of the hot topics for the cloud giant. AWS released previews of two new database offerings: Amazon Neptune and new features for Amazon Aurora.

Amazon Neptune preview

So what is Amazon Neptune? A brand new database service from Amazon. It is a fast, reliable, fully managed graph database service that makes it easy to develop and deploy applications. It is purpose-built to store billions of relationships and run queries with millisecond latency. Neptune is highly secure, with built-in support for encryption, and because it is fully managed, customers need not worry about routine database management tasks. Neptune supports the popular graph models Property Graph and W3C's RDF, along with their respective query languages, Apache TinkerPop Gremlin and SPARQL. This lets customers build queries easily, and those queries can efficiently navigate highly connected datasets. Some of its key benefits include:

- high availability
- point-in-time recovery
- continuous backup to Amazon S3
- replication across availability zones

Amazon Aurora

Amazon Aurora announced previews of two new features at re:Invent: Aurora Multi-Master and Aurora Serverless. Let's take a brief look at what each has in store.

Aurora Serverless

Aurora Serverless lets customers create database instances that run only when required: databases scale up or down automatically based on demand, which saves considerable time. It is designed for workloads that are highly variable and subject to rapid change. Customers pay for the resources they use on a second-by-second basis, which can also save considerable money.
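The pay-per-second billing model can be made concrete with a toy calculator; the capacity unit and the rate below are hypothetical, not AWS's actual Aurora pricing:

```python
# Toy illustration of Aurora Serverless's pay-per-second billing model.
# Rates and capacity units here are made up for the example.

def serverless_cost(active_seconds, capacity_units, rate_per_unit_second):
    """Bill only for the seconds the database was actually running."""
    return active_seconds * capacity_units * rate_per_unit_second

# A database active for 10 minutes at 2 capacity units, idle the rest of the day:
cost = serverless_cost(active_seconds=600, capacity_units=2,
                       rate_per_unit_second=0.000002)
```

With an idle-most-of-the-day workload like this, the bill covers only the 10 active minutes rather than a continuously provisioned instance.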
The preview of the serverless feature will be available for the MySQL-compatible edition of Amazon Aurora.

Aurora Multi-Master

- It allows customers to distribute database writes across several datacenters.
- It promises customers zero application downtime in the face of failures of database nodes or availability zones.
- Customers can also benefit from faster write performance.

At present, the Aurora Multi-Master preview covers single-region distribution only. However, Amazon expects to extend it between regions, across AWS's global physical infrastructure, by next year.

Internet of Things

The next technology Amazon championed this year was IoT. Here is a list of announcements made for IoT applications.

AWS IoT Device Management

AWS IoT Device Management allows customers to onboard, configure, monitor, and remotely manage IoT devices securely throughout each device's entire lifecycle. Customers can log into the AWS IoT console to register devices, either individually or in bulk, and can also upload attributes, certificates, and access policies. It also helps customers maintain an inventory holding information about each IoT device, such as serial numbers and firmware versions; using this information, one can easily track down devices that need troubleshooting. Devices can be managed individually, in groups, or as an entire fleet.

AWS Greengrass ML Inference

The AWS Greengrass ML Inference preview lets customers deploy and run ML inference locally on connected devices, bringing better, more intelligent computing capabilities to IoT devices. Running inference on the device itself reduces both the latency and the cost of sending device data to the cloud for prediction. AWS Greengrass ML Inference lets app developers incorporate machine learning into their devices with no explicit ML skills required.
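The on-device inference pattern that Greengrass ML Inference enables can be sketched as follows; the model, threshold, and fallback are stand-ins for illustration, not Greengrass APIs:

```python
# Sketch of the edge-inference pattern: run a model locally and fall back
# to the cloud only for low-confidence inputs. The classifier below is a
# stand-in, not an actual Greengrass deployment.

def local_model(reading):
    """Stand-in classifier: returns (label, confidence)."""
    return ("anomaly", 0.95) if reading > 80 else ("normal", 0.55)

def classify(reading, cloud_fallback, threshold=0.9):
    label, confidence = local_model(reading)
    if confidence >= threshold:
        return label                    # decided on-device, no round trip
    return cloud_fallback(reading)      # only uncertain cases leave the device

result = classify(90, cloud_fallback=lambda r: "cloud-decision")  # → "anomaly"
```

Most readings are decided on-device; only the uncertain ones incur the latency and cost of a cloud round trip, which is the trade-off described above.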
It allows devices to run ML models locally, get the output, and make smart decisions rapidly, even without a network connection. Inference runs on the connected device itself, and data is sent to the cloud only in cases that require more processing.

AWS IoT Analytics preview

re:Invent also gave us a preview of AWS IoT Analytics, a fully managed analytics service that provides advanced analysis of data collected from millions of IoT devices, with no added management of hardware or infrastructure. Let's look at some of its benefits:

- Customers get access to pre-built analytical functions that help with predictive analysis of data.
- Customers can visualize the analytical output from the service.
- Tools for cleaning up data are provided.
- It helps identify patterns within the gathered data.

In addition, AWS IoT Analytics offers visualization of your data through Amazon QuickSight and integrates with Jupyter notebooks to bring in the power of machine learning. To know more about AWS IoT in detail, you can visit the link here.

Machine Learning

re:Invent introduced a variety of new platforms, tools, and frameworks for machine learning.

AWS DeepLens

Amazon brought an innovative way for data scientists and developers to get hands-on deep learning experience. AWS DeepLens is an AI-enabled video camera that runs deep learning models locally on the camera to analyze, and act on, what it sees. It enables developers to build apps while getting practical, hands-on examples of AI, IoT, and serverless computing. The hardware boasts a 4-megapixel camera that can capture 1080p video and a 2D microphone array. DeepLens has an Intel Atom processor with over 100 GFLOPS of compute power for processing deep learning predictions in real time.
It also has 8 GB of built-in memory for storing pre-trained models and code. On the software side, AWS DeepLens runs Ubuntu 16.04 and comes preloaded with AWS Greengrass Core; other frameworks such as TensorFlow and Caffe2 can also be used. DeepLens includes the Intel clDNN library and lets developers use AWS Greengrass, AWS Lambda, and other AWS AI and infrastructure services in their apps.

Amazon Comprehend

Billed as a continuously trained natural language processing (NLP) service, Amazon Comprehend lets customers analyze text and extract everything within it: the language used (from Afrikaans to Yoruba, and 98 more), entities (people, places, products, etc.), sentiment (positive, negative, and so on), key phrases, and much more. Comprehend also offers a topic modeling service that extracts topics from a large set of documents for analysis and topic-based grouping.

Amazon Rekognition Video

With Rekognition Video, Amazon strengthens its position against similar offerings in the market. Rekognition Video uses deep learning to derive detailed, complete insights from video. It lets developers get detailed information about the objects in a video, including the scenes the video is set in, the activities happening within it, and so on. It also supports person detection; for instance, it is pre-trained to recognize famous celebrities. It can track people through a video and can filter out inappropriate content. In short, it can easily generate metadata from video files.

Amazon SageMaker

An end-to-end machine learning service that helps developers and data scientists build, train, and deploy machine learning models quickly, easily, and at scale. It consists of three modules:

Build: an environment to work with your data, experiment with algorithms, and visualize output in detail.
Train: allows one-click model training and tuning at high scale and low cost.

Deploy: provides a managed environment where customers can easily host their models and test them securely for low-latency inference.

Amazon SageMaker removes machine learning complexity for developers. Customers can easily build and train their ML models in the cloud, and with a few additional clicks they can use the AWS Greengrass console to transfer those models to the devices they have selected. For a detailed view of how SageMaker works, visit the link here.

Amazon Translate preview

Amazon also unveiled a preview of Amazon Translate, a high-quality neural machine translation service. Translate uses advanced machine learning to enable fast translation of text-based content: neural networks trained to translate between language pairs let developers build applications with multilingual user experiences. Organizations and businesses stand to benefit greatly, since they can now market their products in different regions; consumers can access websites, information, and resources in their language of choice through automated translation. Customers can also take part in multiplayer chats, gather information from consumer forums, dive into educational documents, and even read hotel reviews, even when those resources are in a language they cannot readily understand. Amazon Translate can be used with other Amazon services such as Amazon Polly, Amazon S3, Amazon Elasticsearch Service, Amazon Lex, AWS Lambda, and many others. The service is currently in preview and can translate text to and from English across its supported languages.
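Amazon Comprehend's document sentiment, described earlier, comes back as a label such as POSITIVE or NEGATIVE. A toy keyword-based stand-in (in no way Comprehend's actual model or API) illustrates the shape of that output:

```python
# Toy keyword-based sentiment scorer, illustrating only the shape of a
# Comprehend-style sentiment result: a label plus a score.

POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "hate", "poor"}

def toy_sentiment(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        label = "POSITIVE"
    elif score < 0:
        label = "NEGATIVE"
    else:
        label = "NEUTRAL"
    return {"Sentiment": label, "Score": score}

toy_sentiment("I love this excellent camera")  # → {"Sentiment": "POSITIVE", "Score": 2}
```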
Amazon Transcribe preview

Amazon launched a preview of Amazon Transcribe, an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. A notable feature of Transcribe is its efficient, scalable API, which saves developers from the expensive process of manual transcription. It can also analyze audio files stored on Amazon Simple Storage Service (S3) in formats such as WAV, MP3, and FLAC. Transcripts are detailed, with a timestamp for each word and inferred punctuation.
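A sketch of consuming per-word timestamps like those Transcribe produces; the structure below is a simplified illustration, not Transcribe's exact response format:

```python
# Simplified illustration of a word-level transcript with timestamps
# (seconds), and how an application might slice it by time.

words = [
    {"word": "hello", "start": 0.00, "end": 0.42},
    {"word": "world", "start": 0.45, "end": 0.90},
]

def words_spoken_before(t, transcript):
    """All words fully spoken before time t (in seconds)."""
    return [w["word"] for w in transcript if w["end"] <= t]

words_spoken_before(0.5, words)  # → ["hello"]
```

Per-word timing like this is what makes features such as subtitle alignment or jumping to a spoken phrase possible.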

Natasha Mathur
14 Nov 2018
4 min read

Monday’s Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia

Google faced a major outage on Monday this week as it went down for over an hour, taking a toll on Google Search and a majority of its other services, such as the Google Cloud Platform. The outage was apparently the result of Google losing control of the normal routes to its IP addresses: due to a BGP (Border Gateway Protocol) issue, traffic was misdirected through China Telecom, Nigeria, and Russia.

The issue began at 21:13 UTC, when MainOne Cable Company, a carrier in Lagos, Nigeria, declared its own autonomous system 37282 as the right path to reach 212 IP prefixes belonging to Google, reported Ars Technica. Shortly after, China Telecom improperly accepted the route and further declared it worldwide, leading Transtelecom and other large service providers in Russia to follow the same route.

BGPmon, a networking and security company that assesses the route health of networks, tweeted on Monday that it “appears that Nigerian ISP AS37282 'MainOne Cable Company' leaked many @google prefixes to China Telecom, who then advertised it to AS20485 TRANSTELECOM (Russia). From there on others appear to have picked this up”. BGPmon also tweeted that the redirection of IP addresses came in five distinct waves over a 74-minute period:

https://twitter.com/bgpmon/status/1062130855072546816

ThousandEyes, another network intelligence company, tweeted that a “potential hijack” was underway. According to ThousandEyes, it had detected over 180 prefixes affected by the route leak, covering a wide range of Google services.

https://twitter.com/thousandeyes/status/1062102171506765825

This fed growing suspicion, since China Telecom, a Chinese state-owned telecommunication company, recently came under the spotlight for misrouting western carrier traffic through mainland China. On further analysis, however, ThousandEyes concluded that “the origin of this leak was the BGP peering relationship between MainOne, the Nigerian provider, and China Telecom”.
MainOne is in a peering relationship with Google via IXPN in Lagos and has direct routes to Google, which leaked into China Telecom. These routes were then propagated further from China Telecom, via TransTelecom, to NTT and other transit ISPs. “We also noticed that this leak was primarily propagated by business-grade transit providers and did not impact consumer ISP networks as much”, reads the ThousandEyes blog.

BGPmon further tweeted that, apart from Google, Cloudflare faced the same issue, as its IP addresses followed the same route as Google's.

https://twitter.com/bgpmon/status/1062145172773818368

However, Matthew Prince, CEO of Cloudflare, told Ars Technica that the routing issue was just an error and that the chances of it being a malicious hack were low. “If there was something nefarious afoot there would have been a lot more direct, and potentially less disruptive/detectable, ways to reroute traffic. This was a big, ugly screw up. Intentional route leaks we’ve seen to do things like steal cryptocurrency are typically far more targeted,” said Prince.

“We’re aware that a portion of Internet traffic was affected by the incorrect routing of IP addresses, and access to some Google services was impacted. The root cause of the issue was external to Google and there was no compromise of Google services,” a Google representative told Ars Technica.

MainOne also posted an update about the issue on its site, saying that it faced a “technical glitch during a planned network update and access to some of the Google services was impacted. We promptly corrected the situation at our end and are doing all that is necessary to ensure it doesn’t happen again. The error was accidental on our part; we were not aware that any Google services were compromised as a result”.
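One reason leaks like this spread is that BGP routers prefer the most specific (longest) matching prefix among the routes they accept. A minimal longest-prefix-match illustration using Python's standard library; the prefixes and labels below are made up, not those involved in this incident:

```python
# Longest-prefix-match route selection: a leaked, more-specific prefix
# wins over the legitimate, broader announcement. Prefixes are fictional.
import ipaddress

routes = {
    "203.0.113.0/24": "legitimate origin",
    "203.0.113.0/25": "leaked, more-specific route",
}

def select_route(dest, table):
    addr = ipaddress.ip_address(dest)
    matches = [p for p in table if addr in ipaddress.ip_network(p)]
    # Prefer the longest (most specific) matching prefix.
    return table[max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)]

select_route("203.0.113.10", routes)  # → "leaked, more-specific route"
```

Once a neighbor accepts and re-advertises such a route, as China Telecom did here, every downstream network applying the same selection logic follows the leak.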
MainOne further addressed the issue on Twitter, saying that the problem was caused by a misconfiguration in its BGP filters:

https://twitter.com/Mainoneservice/status/1062321496838885376

The main takeaway from this incident is that doing business on the Internet remains risky, and there will be times when that risk leads to unpredictable and destabilizing events that are not necessarily ‘malicious hacks’.

Basecamp 3 faces a read-only outage of nearly 5 hours

GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage

Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users

Prasad Ramesh
11 Sep 2018
4 min read

Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.

Last Friday, a group of Spanish researchers published a research paper titled ‘Towards a distributed and real-time framework for robots: evaluation of ROS 2.0 communications for real-time robotic applications’. The paper describes an experimental setup exploring the suitability of ROS 2.0 for real-time robotic applications. In it, ROS 2.0 communications are evaluated in an inter-component robotic communication case running on top of Linux. The researchers benchmarked and studied worst-case latencies and characterized ROS 2.0 communications for real-time applications. The results indicate that a proper real-time configuration of the ROS 2.0 framework reduces jitter, making soft real-time communications possible, but some limitations still prevent hard real-time communications.

What is ROS?

ROS is a popular framework that provides services for the development of robotic applications. It offers a communication infrastructure, drivers for a variety of software and hardware components, and libraries for diagnostics, navigation, manipulation, and other tasks. ROS simplifies the process of creating complex and robust robot behavior across many robotic platforms. ROS 2.0 is the new version, which extends the concepts of the first; it uses Data Distribution Service (DDS) middleware because of its characteristics and benefits compared to other solutions.

The need for real-time behavior in robotic systems

In all robotic systems, tasks need to be time-responsive: while moving at a certain speed, a robot must be able to detect an obstacle and stop to avoid a collision. These systems often have timing requirements for executing tasks or exchanging data, and when those requirements are not met, system behavior degrades or the system fails. With ROS being the standard software infrastructure for robotic application development, demand rose in the ROS community for real-time capabilities.
Hence, ROS 2.0 was created with real-time performance in mind. But to deliver a complete, distributed, real-time solution for robots, ROS 2.0 needs to be surrounded by appropriate elements, which are described in the papers ‘Time-sensitive networking for robotics’ and ‘Real-time Linux communications: an evaluation of the Linux communication stack for real-time robotic applications’. ROS 2 uses DDS as its communication middleware; DDS exposes Quality of Service (QoS) parameters that can be configured and tuned for real-time applications.

The results of the experiment

In the paper, a setup was built to measure the real-time performance of ROS 2.0 communications over Ethernet on a PREEMPT-RT patched kernel. The end-to-end latencies between two ROS 2.0 nodes on different machines were measured, using a Linux PC and an embedded device representing a robot controller (RC) and a robot component (C).

[Figure: overview of the experimental setup. Source: LinkedIn]

[Figure: impact of RT settings under different system loads: a) system without additional load, without RT settings; b) system under load, without RT settings; c) system without additional load, with RT settings; d) system under load, with RT settings. Source: LinkedIn]

The results showed that a proper real-time configuration of the ROS 2.0 framework and DDS threads greatly reduces jitter and worst-case latencies, which means smooth and fast communication. However, there were also limitations when non-critical traffic in the Linux network stack was in the picture. By configuring the network interrupt threads and using Linux traffic-control QoS methods, some of these problems could be avoided. The researchers conclude that it is possible to achieve soft real-time communications with mixed-critical traffic using the Linux network stack.
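The worst-case latency and jitter figures the researchers characterize can be sketched with simple summary statistics; the sample values below are made up, not the paper's measurements:

```python
# Summary statistics over a set of measured end-to-end latencies
# (microseconds; values are illustrative, including one outlier spike).

samples_us = [210, 215, 208, 220, 480, 212]

worst_case = max(samples_us)
mean = sum(samples_us) / len(samples_us)
jitter = worst_case - min(samples_us)  # spread between best and worst case

# A hard real-time system must bound worst_case; a soft real-time system
# can tolerate rare spikes like the 480 µs outlier above.
```

This is why the paper focuses on worst-case latency rather than the mean: a single unbounded spike is enough to break a hard real-time guarantee.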
However, hard real-time is not possible due to the aforementioned limitations. For a more detailed understanding of the experiments and results, you can read the research paper.

Shadow Robot joins Avatar X program to bring real-world avatars into space

6 powerful microbots developed by researchers around the world

Boston Dynamics’ ‘Android of robots’ vision starts with launching 1000 robot dogs in 2019

Fatema Patrawala
16 May 2019
3 min read

US blacklists China's telecom giant Huawei over threat to national security

Reuters reported on Wednesday that the Trump administration hit Chinese telecoms giant Huawei by adding the company to its so-called “Entity List”. The Commerce Department said that placing Huawei Technologies and its 70 affiliates on this list bans the company from acquiring components and technology from US firms without government approval.

https://twitter.com/MSNBC/status/1128765546806284288

President Donald Trump took this decision to “prevent American technology from being used by foreign-owned entities in ways that potentially undermine US national security or foreign policy interests”, US Commerce Secretary Wilbur Ross said in a statement on Wednesday. The move comes at a delicate time, as the world's two largest economies fight a tariff battle over what the US officially calls China's unfair trade practices. On the same day, Trump signed an executive order barring US companies from using telecommunications equipment made by firms deemed to pose a national security risk. The order did not name any country or company, but US officials have previously labelled Huawei a “threat” and actively lobbied allies not to use Huawei network equipment in next-generation 5G networks.

In response, Huawei told ABC News on Wednesday that possible new U.S. restrictions on market access would have little impact on the company, and that these were “unreasonable restrictions” by the United States.

https://twitter.com/HuaweiFacts/status/1128904569621082112

“Restricting Huawei from doing business in the US will not make the US more secure or stronger; instead, this will only serve to limit the US to inferior yet more expensive alternatives,” the telecom giant said in a statement. “In addition, unreasonable restrictions will infringe upon Huawei's rights and raise other serious legal issues,” the statement said.
It said it was “ready and willing to engage with the US government and come up with effective measures to ensure product security”. US prosecutors had already charged two Huawei units in Washington state in January, for conspiring to steal T-Mobile US Inc trade secrets, and last week the FCC voted unanimously to deny China Mobile's bid to provide US telecommunications services. The news adds urgency as US wireless carriers roll out 5G networks.

Elite US universities including MIT and Stanford break off partnerships with Huawei and ZTE amidst investigations in the US

China’s Huawei technologies accused of stealing Apple’s trade secrets, reports The Information

Huawei launches Kirin 980, the world’s first 7nm mobile AI chip

Melisha Dsouza
19 Feb 2019
3 min read

Firedome’s ‘Endpoint Protection’ solution for improved IoT security

Last month, Firedome Inc announced the launch of the world’s first endpoint cybersecurity solutions portfolio specifically tailored to home IoT companies and manufacturers. Firedome has developed business models that allow companies to implement top-quality endpoint cybersecurity and close the critical security gaps that are a byproduct of the IoT era.

Home IoT devices are susceptible to cyber attacks due to a lack of regulation and budget limitations. Cryptojacking, DDoS, and ransomware attacks are only a few examples of the cyber crimes threatening the smart home ecosystem and consumer privacy, and the industry's low margins have left manufacturers struggling to implement high-end cybersecurity solutions.

Features of Firedome's ‘Endpoint Protection’ solution:

- A lightweight software agent that can easily be added to any connected device, during the manufacturing process or later on, ‘over the air’.
- A cloud-based AI engine that collects and analyzes aggregated data from multiple fleets around the world, produces insights from each attack (or attack attempt), and optimizes them across the board.
- An accompanying 24/7 SOC team that responds to alerts, conducts security research, and supports Firedome customers.

The Firedome solution adds a dynamic layer of protection: it is designed not only to prevent attacks from occurring in the first place, but also to identify attack attempts and respond to breaches in real time, eliminating the potential for damage until a firmware update is released. The Firedome Home Solution enables industry players to provide their consumers with cyber protection and security insights for the entire home network.

Moti Shkolnik, Firedome’s co-founder and CEO, says: “We are very excited to formally launch our suite of services and solutions for the home IoT industry and we strongly believe they have the potential of changing the Home IoT cybersecurity landscape.
Device companies and other ecosystem players are craving a solution that is tailored to their needs and business constraints, a solution that will address the vulnerability that is so evident in endpoint devices. Home IoT devices are becoming a commodity and the industry must address these vulnerabilities sooner rather than later. That’s why our solution is a ‘must-have’ rather than a ‘nice-to-have’.”

These solutions have led to Firedome's selection by Universal Electronics Inc., the worldwide leader in universal control and sensing technologies for the smart home, to provide cybersecurity features for the Nevo® Butler digital assistant platform. To know more about this news in detail, head over to Firedome’s official website.

California passes the U.S.’ first IoT security bill

IoT Forensics: Security in an always connected world where things talk

AWS IoT Greengrass extends functionality with third-party connectors, enhanced security, and more

Bhagyashree R
02 Jul 2019
3 min read

Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard

Yesterday, Google announced that it has teamed up with the creator of the Robots Exclusion Protocol (REP), Martijn Koster, and other webmasters to make the 25-year-old protocol an internet standard. The REP, better known as robots.txt, has now been submitted to the IETF (Internet Engineering Task Force). Google has also open sourced its robots.txt parser and matcher as a C++ library.

https://twitter.com/googlewmc/status/1145634145261051906

REP was created back in 1994 by Martijn Koster, a software engineer known for his contributions to internet search. Since its inception, it has been widely adopted by websites to indicate whether web crawlers and other automatic clients are allowed to access the site. When an automatic client wants to visit a website, it first checks robots.txt, which shows something like this:

User-agent: *
Disallow: /

The User-agent: * statement means the rules apply to all robots, and Disallow: / means a robot is not allowed to visit any page of the site.

Despite being used widely on the web, REP is still not an internet standard. With no rules set in stone, developers have interpreted this “ambiguous de-facto protocol” differently over the years, and it has not been updated since its creation to address modern corner cases. The proposed draft is a standardized and extended version of REP that gives publishers fine-grained control over what they would like crawled on their site and potentially shown to interested users. The following are some of the important updates in the proposed REP:

- It is no longer limited to HTTP and can be used with any URI-based transfer protocol, for instance FTP or CoAP.
- Developers need to parse at least the first 500 kibibytes of a robots.txt file. This ensures connections are not kept open too long, avoiding unnecessary strain on servers.
- It defines a new maximum caching time of 24 hours, after which crawlers cannot keep using a cached robots.txt.
This lets website owners update their robots.txt whenever they want, while also keeping crawlers from overloading sites with robots.txt requests.

- It also defines a provision for cases when a previously accessible robots.txt file becomes inaccessible because of server failures: in such cases, known disallowed pages are not crawled for a reasonably long period of time.

This updated REP standard is currently in its draft stage, and Google is seeking feedback from developers. It wrote, “we uploaded the draft to IETF to get feedback from developers who care about the basic building blocks of the internet. As we work to give web creators the controls they need to tell us how much information they want to make available to Googlebot, and by extension, eligible to appear in Search, we have to make sure we get this right.”

To know more in detail, check out the official announcement by Google, as well as the proposed REP draft.

Do Google Ads secretly track Stack Overflow users?

Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”

Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers
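The User-agent and Disallow semantics described above can be exercised with Python's standard-library robots.txt parser (which implements the de-facto protocol rather than every extension in the new IETF draft):

```python
# Parse a robots.txt and check which URLs a crawler may fetch, using the
# stdlib parser. The rules here block only the /private/ path.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

rp.can_fetch("anybot", "https://example.com/public/page")   # → True
rp.can_fetch("anybot", "https://example.com/private/page")  # → False
```

Google's newly open-sourced C++ library plays the same role for Googlebot: given a robots.txt and a URL, it answers whether fetching is allowed.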
Sugandha Lahoti
26 Apr 2019
4 min read

Google announces new policy changes for employees to report misconduct amid complaints of retaliation and harassment

Google has announced new policy changes to address employee concerns about misconduct and harassment, following a series of massive protests. The changes were announced in an email sent to Googlers by Google’s global director of diversity, equity, and inclusion, Melonie Parker; later yesterday, a blog post was published introducing the policy updates publicly. “We want every Googler to walk into a workplace filled with dignity and respect,” Parker wrote in the email.

Google’s misconduct-related policy updates

First, the company is building a new dedicated website to help employees raise issues of misconduct and harassment in a simpler, clearer way. It will also provide a similar site for temp and vendor workers, scheduled to go live by June. Google has also internally published its fifth annual Investigations Report, a summary of employee-related misconduct investigations, along with a new Investigations Practice Guide outlining how concerns are handled within Employee Relations, so employees know what to expect during the investigations process. What is shared publicly are Google’s workplace policies on harassment, discrimination, retaliation, standards of conduct, and workplace conduct.

Google is also expanding its Support Person Program, under which Googlers can bring a trusted colleague to their harassment and discrimination investigations, and has rolled out a new Investigations Care Program to provide better care to Googlers during and after an investigation.

Google’s workplace issues over the past year

In November last year, 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment they encountered at Google’s workplace. The walkout was planned after The New York Times brought to light shocking allegations of sexual misconduct at Google by Andy Rubin, the creator of Android.
In the six months since the walkout, Google’s workplace issues have steadily continued. Just last week, two walkout organizers accused the company of retaliating against them over the protest. The two Google employees, YouTube marketing manager Claire Stapleton and Meredith Whittaker, head of Google’s Open Research group, were told their roles would change dramatically, including calls to abandon AI ethics work, demotion, and more.

Yesterday, an unidentified individual filed a complaint with the National Labor Relations Board accusing Google of violating federal law by retaliating against an employee. According to Bloomberg, the case involves an alleged violation of a New Deal-era ban on punishing employees for involvement in collective action related to working conditions.

Google has so far addressed only one of the walkout organizers’ original demands: ending forced arbitration for all its full-time employees, though not for Google’s temporary and contract workers. It has also lifted the ban on class action lawsuits for employees.

[box type="shadow" align="" class="" width=""]The complete demands laid out by the Google employees are as follows:

- An end to Forced Arbitration in cases of harassment and discrimination for all current and future employees.
- A commitment to end pay and opportunity inequity.
- A publicly disclosed sexual harassment transparency report.
- A clear, uniform, and globally inclusive process for reporting sexual misconduct safely and anonymously.
- Elevate the Chief Diversity Officer to answer directly to the CEO and make recommendations directly to the Board of Directors.
- Appoint an Employee Rep to the Board.[/box]

Yet, the fight is not over for the walkout organizers. Two of their original demands, putting an employee representative on the company’s board of directors and having the chief diversity officer report directly to the CEO, have received no response from Google.
Currently, Melonie Parker reports to VP of people operations Eileen Naughton rather than to CEO Sundar Pichai. There were some scathing comments about Naughton from an ex-Googler who quit earlier this year on ethical grounds.

https://twitter.com/mcmillen/status/1121418157313409024
https://twitter.com/mcmillen/status/1121420553787723776

People also raised questions about how Google is going to make up for its misconduct towards its ex-employees.

https://twitter.com/VidaVakil/status/1121581294972878848

Techworkersco also added, “What we see here is Google management scrambling to placate workers in the face of serious claims that the company retaliates.”

Google employees ‘Walkout for Real Change’ today. These are their demands.
#GoogleWalkout organizers face backlash at work, tech workers show solidarity
Google TVCs write an open letter to Google’s CEO; demands for equal benefits and treatment.

‘Dropbox Paper’ leaks out email addresses and names on sharing document publicly

Amrata Joshi
27 Sep 2019
3 min read
This week, Koen Rouwhorst, a security engineer at Framer, reported that a feature of the document collaboration tool Dropbox Paper leaks “the full name and email address of _any_ Dropbox user whoever opened that document, which seems problematic.”

https://twitter.com/koenrh/status/1176523837866946561
https://twitter.com/koenrh/status/1176794225075204097

Dropbox Support responded that privacy considerations were built into how the feature was designed. According to the support team, displaying this information is required to enable collaboration and security features for users, and admins and users receive additional control over who can view a Paper doc.

According to The Register, “if someone gets to know the link, because in your enthusiasm you posted it on social media, or sent to your contact and they posted it, they may click the link and visit the page. On arrival, if they are logged into Dropbox, a warning displays, though in faint type, that says: when you open a doc, your name, email, avatar photo and viewer and visit information is always visible to other people in it.”

Though Dropbox differentiates between active and inactive viewers, this information remains with Dropbox even after the user has left the page. Anyone who has logged into the document will be able to see the names and email addresses of others. However, when a user clicks the link without being logged into Dropbox, the user is shown to other users as a guest and cannot comment on or edit the document.

Users may be logged into Dropbox by default, so they might see a warning and, if they proceed, end up sharing their name and email address. This works while collaborating with a team where people know each other. As per Dropbox’s permissions page, a user can create a private document that is not inside a folder, in which case they should be the only person editing it.
While sharing the doc with others, the user can choose who can open the doc and who can comment or edit. If a user creates a doc within a folder, all the members of that folder can open, search for, and edit the doc.

Users on Hacker News seem to be sceptical about this feature. One user commented on the thread, “Not only that, but Dropbox lets you pick any publicly visible document that's been viewed by a large number of people and easily spam them simply by writing @doc. I may have just pissed off a lot of people with my experiment. I realized immediately afterwards how reckless that was, but Dropbox - WTF? Why is this even allowed?”

A few others complained about not being notified by the warning: “I just created a Paper document on my Dropbox account and then viewed it on another account. As best I can tell, Dropbox saying there is a notification is a lie. I did not get a visible notification when creating it although there may have been one buried under some links or button. Paper documents are publicly editable by default if you have the url.”

Other interesting news in data

Can a modified MIT ‘Hippocratic License’ to restrict misuse of open source software prompt a wave of ethical innovation in tech?
ImageNet Roulette: New viral app trained using ImageNet exposes racial biases in artificial intelligent system
GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

GitHub is bringing back Game Off, its sixth annual game building competition, in November

Natasha Mathur
16 Oct 2018
3 min read
The GitHub team announced yesterday that it is coming back with its sixth annual game jam, Game Off, in November. In Game Off, participants are given one month to create games based on a theme provided by GitHub. Anyone, from newbies to professional game developers, can participate, without any restrictions, and the game can be simple or complex, depending on your preference.

The Game Off team recommends using open source game engines, libraries, and tools, but you can use any technology you want, as long as it is reliable. Both team and solo participation are acceptable; in fact, you can also make multiple submissions.

The theme for last year’s Game Off was “throwback”. There were over 200 games created, including old-school LCD games, retro flight simulators, squirrel-infested platformers, and more. This year’s theme will be announced on Thursday, November 1st, at 13:37.

Last year’s overall winner, which was also voted best gameplay, was a game called Daemon vs. Demon, in which a hero must slay rogue demons to remain in the world of the living. It was built by a user named Securas from Japan, with the open source Godot game engine. Winners were also picked in other categories such as best audio, best theme interpretation, and best graphics.

To participate in Game Off, you need a GitHub account. You can then join the Game Off challenge on itch.io. You don’t need a separate itch.io account; you can simply log in with your GitHub account. Once that’s done, all you need to do is create a new repository to store the source code and other related assets. Just make sure that you push your changes to the game before December 1st.
“As always, we'll highlight some of our favorite games on the GitHub Blog, and the world will get to enjoy (and maybe even contribute to or learn from) your creations”, mentions the GitHub team. For more information, check out the official Game Off announcement.

Meet wideNES: A new tool by Nintendo to let you experience the NES classics again
Meet yuzu – an experimental emulator for the Nintendo Switch
Now you can play Assassin’s Creed in Chrome thanks to Google’s new game streaming service

Zuckerberg just became the target of the world's first high profile white hat deepfake op. Can Facebook come out unscathed?

Vincy Davis
12 Jun 2019
6 min read
Yesterday, Motherboard reported that a fake video of Mark Zuckerberg was posted on Instagram under the username bill_posters_uk. In the video, Zuckerberg appears to give a threatening speech about the power of Facebook.

https://twitter.com/motherboard/status/1138536366969688064

Motherboard mentions that the video was created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny. Canny, in partnership with Posters, has previously created several such deepfake videos of Donald Trump, Kim Kardashian, and others. Omer Ben-Ami, one of the founders of Canny, says the video is meant to educate the public on the uses of AI and to make them realize its potential. According to other news sources, however, the artists created the video to test Facebook’s policy of not taking down fake videos and misinformation for the sake of retaining their “educational value”.

Recently, Facebook received strong criticism for promoting fake videos on its platform. In May, the company had refused to remove a doctored video of senior politician Nancy Pelosi. Neil Potts, Public Policy Director of Facebook, had stated that if someone posted a doctored video of Zuckerberg, like the one of Pelosi, it would stay up. Around the same time, Monika Bickert, vice president for Product Policy and Counterterrorism at Facebook, had said of the fake Pelosi video, “Anybody who is seeing this video in News Feed, anyone who is going to share it to somebody else, anybody who has shared it in the past, they are being alerted that this video is false. And this is part of the way that we deal with misinformation.”

Following all of this, Facebook’s stance has now been put to the test with Mark Zuckerberg’s fake video, and it passed: an Instagram spokesperson commented that the video will stay up on the platform but will be removed from recommendation surfaces.
“We will treat this content the same way we treat all misinformation on Instagram,” a spokesperson for Instagram told Motherboard. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”

The fake Mark Zuckerberg video is a short one in which he talks about Facebook’s power: “Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures, I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.” The video is also framed with broadcast chyrons reading, “Zuckerberg: We're increasing transparency on ads. Announces new measures to protect elections.” This was done to make it appear like a usual news report.

[box type="shadow" align="" class="" width=""]As the video is fake and unauthentic, we have not added a link to it in our article.[/box]

The audio in the video sounds much like a voiceover, but without any sync issues, and is loud and clear. The visuals are almost accurate: the person shown blinks, moves seamlessly, and gestures the way Zuckerberg would. Motherboard reports that the visuals are taken from a real video of Zuckerberg from September 2017, when he was addressing Russian election interference on Facebook. The Instagram post containing the video states that it was created using CannyAI's video dialogue replacement (VDR) technology.
In a statement to Motherboard, Omer Ben-Ami explained how the Mark Zuckerberg deepfake was made: “Canny engineers arbitrarily clipped a 21-second segment out of the original seven minute video, trained the algorithm on this clip as well as videos of the voice actor speaking, and then reconstructed the frames in Zuckerberg's video to match the facial movements of the voice actor.”

Omer also mentions that “the potential of AI lies in the ability of creating a photorealistic model of a human being. It is the next step in our digital evolution where eventually each one of us could have a digital copy, a Universal Everlasting human. This will change the way we share and tell stories, remember our loved ones and create content”.

A CNN reporter tweeted that CBS is asking Facebook to remove the fake Zuckerberg video because it shows the CBS logo: “CBS has requested that Facebook take down this fake, unauthorized use of the CBSN trademark”.

Apparently, the fake video of Zuckerberg has garnered some good laughs in the community. It is also seen as the next wave in the battle against misinformation on social media sites. A user on Hacker News says, “I love the concept of this. There's no better way to put Facebook's policy to the test than to turn it against them.”

https://twitter.com/jason_koebler/status/1138515287853228032
https://twitter.com/ezass/status/1138592610363174913

But many users are also concerned that if a fake video can look so accurate now, it is going to be a challenge to identify which information is true and which is false. A user on Reddit comments, “This election cycle will be a dry run for the future. Small ads, little bits of phrases and speeches will stream across social media. If it takes hold, I fear for the future.
We will find it very, very difficult to know what is real without a large social change, as large as the advent of social media in the first place.”

Another user adds, “I'm routinely surprised by the number of people unaware just how far this technology has progressed just in the past three years, as well as how many people are completely unaware it exists at all. At this point, I think that's scarier than the tech itself.” And another comments, “True. Also the older generation. I can already see my grandpa seeing a deepfake on Fox news and immediately considering it gospel without looking into it further.”

US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Facebook argues it didn’t violate users’ privacy rights and thinks there’s no expectation of privacy because there is no privacy on social media
Google and Facebook allegedly pressured and “arm-wrestled” EU expert group to soften European guidelines for fake news: Open Democracy Report

Fyne 1.0 released as a cross-platform GUI in Go based on Material Design

Sugandha Lahoti
25 Mar 2019
2 min read
Last Wednesday marked the first major milestone for Fyne, a cross-platform GUI toolkit written in Go. Fyne 1.0 uses OpenGL to provide cross-platform graphics, and the entire toolkit is built on scalable graphics.

The toolkit communicates with the operating system’s graphics stack through OpenGL, which is supported on almost all desktop and laptop systems. To do this, it relies on Cgo, Go’s built-in bridge to C. For packaging, the fyne package command generates all the metadata required to distribute an application on macOS, Linux, or Windows. By default, it builds an application bundle for the current platform, and it can also be used as part of a cross-compilation workflow.

What’s new in Fyne 1.0?

- Canvas API (rect, line, circle, text, image)
- Widget API (box, button, check, entry, form, group, hyperlink, icon, label, progress bar, radio, scroller, tabs, and toolbar)
- Light and dark themes
- Pointer, key and shortcut APIs (generic and desktop extension)
- OpenGL driver for Linux, macOS, and Windows
- Tools for embedding data and packaging releases

Currently, the release only supports desktop applications. For more info, read Fyne’s blog. You may also check out Hands-On GUI Application Development in Go to learn more about Go programming.

Introducing Web High-Level Shading Language (WHLSL): A graphics shading language for WebGPU
State of Go February 2019 – Golang developments report for this month released
Golang just celebrated its ninth anniversary
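To give a flavor of the widget API listed above, here is a minimal hello-world sketch against the Fyne 1.0 release. This is an illustrative sketch, not code from the announcement; it assumes the fyne.io/fyne package (version 1.x import paths) has been fetched with go get, and widget and function names may differ in later releases.

```go
package main

import (
	"fyne.io/fyne/app"    // application lifecycle (Fyne 1.0 import path)
	"fyne.io/fyne/widget" // standard widget set
)

func main() {
	a := app.New()            // create the application instance
	w := a.NewWindow("Hello") // open a new top-level window

	// Compose widgets from the 1.0 Widget API: a label and a button
	// stacked vertically inside a box container.
	w.SetContent(widget.NewVBox(
		widget.NewLabel("Hello, Fyne!"),
		widget.NewButton("Quit", func() {
			a.Quit()
		}),
	))

	w.ShowAndRun() // show the window and enter the event loop
}
```

Because the toolkit renders through OpenGL via Cgo, this builds to a native binary on each desktop platform, which is what the fyne package command then wraps into a distributable bundle.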

TensorFlow 2.0 is coming. Here's what we can expect.

Richard Gall
15 Aug 2018
3 min read
The last couple of months have seen TensorFlow releases coming thick and fast. Clearly, the Google team is working hard to ship new updates for a framework that seems to be defining deep learning as we know it. But TensorFlow 2.0 remains on the horizon, and that, really, is the release we've all been waiting for. Amid speculation and debate, we now have the first inkling of what we can expect thanks to a post by Google Brain engineer Martin Wicke.

In a somewhat unassuming post on Google Groups, Wicke said that work was underway on TensorFlow 2.0, with a preview version expected later this year. The big changes the team is working towards include:

- Making TensorFlow easier to learn and use by putting eager execution (TensorFlow's imperative programming environment) at the center of the new release
- Support for more platforms and languages
- Removing deprecated APIs

How you can support the TensorFlow 2.0 design process

Wicke writes that TensorFlow 2.0 still needs to go through a public review process. To do this, the project will run a number of public design reviews that walk through the proposed changes in detail and give users the opportunity to give feedback and communicate their views.

What TensorFlow 2.0 means for the TensorFlow project

Once TensorFlow 2.0 is released, migration will be essential. Wicke explains that "We do not anticipate any further feature development on TensorFlow 1.x once a final version of TensorFlow 2.0 is released" and that the project "will continue to issue security patches for the last TensorFlow 1.x release for one year after TensorFlow 2.0’s release date."

The end of tf.contrib?

TensorFlow 2.0 will bring an end (of sorts) to tf.contrib, the repository where code contributed to TensorFlow sits, waiting to be merged. "TensorFlow’s contrib module has grown beyond what can be maintained and supported in a single repository," Wicke writes.
"Larger projects are better maintained separately, while we will incubate smaller extensions along with the main TensorFlow code," he continues. However, Wicke promises that TensorFlow will help the owners of contributed code to migrate appropriately. Some modules could be integrated into the core project, others moved into a separate repository, and others simply removed entirely.

If you have any questions about TensorFlow 2.0, you can get in touch with the team directly by emailing discuss@tensorflow.org. TensorFlow has also set up a mailing list for anyone interested in regular updates; simply subscribe to developers@tensorflow.org.

Read next

Why Twitter (finally!) migrated to Tensorflow
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
Can a production ready Pytorch 1.0 give TensorFlow a tough time?

Chrome 76 Beta released with dark mode, flash blocking by default, new PWA features and more

Sugandha Lahoti
14 Jun 2019
3 min read
Yesterday, Google released Chrome 76 Beta with a number of features, including blocking Flash by default, a dark mode, and making it harder for sites to detect when you’re using Incognito mode to get around paywalls.

https://twitter.com/GoogleChromeDev/status/1139246837024509952

Blocks Flash by default

Chrome 76 Beta blocks Flash in the browser by default. Users still have the option to switch back to the current “Ask first” option in [chrome://settings/content/flash]. Under this option, explicit permission is required for each site after every browser restart.

Incognito detection fix

Chrome 76 fixes the FileSystem API to address how websites are able to detect whether you’re using Incognito to get around a paywall. The API is updated so that “detect private mode” scripts can no longer take advantage of that indicator.

Changes to the payments APIs

Chrome 76 Beta also makes it easier to use the payments APIs with self-signed certificates in a local development environment.

https://twitter.com/paul_irish/status/1138471166115368960

Additionally, PaymentRequestEvent has a new method called changePaymentMethod(), and the PaymentRequest object now supports an event handler called paymentmethodchange. You can use both to notify a merchant when the user changes payment instruments; the former returns a promise that resolves with a new PaymentRequest instance.

Improvements for Progressive Web Apps

Chrome 76 Beta makes it easier for users to install Progressive Web Apps on the desktop by adding an install button to the omnibox. On mobile, developers can now replace Chrome’s Add to Home Screen mini-infobar with their own prompt. PWAs will also check for updates more frequently starting with Chrome 76: every day, instead of every three days.

New dark mode

Chrome 76 Beta also adds a dark mode. Websites can now automatically enable dark themes and respect user preference by adding a little extra code in the prefers-color-scheme media query.
Other improvements

Browsers prevent calls to abusable APIs (like popup, fullscreen, vibrate, etc.) unless the user activates the page through direct interactions. However, not all interactions trigger user activation. Going forward, the Escape key is no longer treated as a user activation.

Chrome 76 Beta introduces a new HTTP request header that sends additional metadata about a request's provenance to the server, allowing it to make security decisions.

The lazyload feature policy has been removed. This policy was intended to allow developers to selectively control the lazyload attribute on the iframe and img tags, providing more control over loading delay for embedded content and images on a per-origin basis.

The stable release of Chrome 76 is tentatively scheduled for July 30th. You can read about additional changes on Google’s Chromium blog post.

Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users.
Google Chrome will soon support LazyLoad, a solution to lazily load below-the-fold images and iframes.
Mozilla puts “people’s privacy first” in its browser with updates to Enhanced Tracking Protection, Firefox Lockwise and Monitor

Uber becomes a Gold member of the Linux Foundation

Savia Lobo
15 Nov 2018
2 min read
Yesterday, at Uber Open Summit 2018, the company announced that it is joining the Linux Foundation as a Gold member, with a promise to support the open source community via the foundation.

Jim Zemlin, Executive Director of the Linux Foundation, said, “Uber has been influential in the open source community for years, and we’re very excited to welcome them as a Gold member at the Linux Foundation. Uber truly understands the power of open source and community collaboration, and I am honored to witness that first hand as a part of Uber Open Summit 2018.”

By being a member, Uber will support the Linux Foundation’s mission and help the community build ecosystems that accelerate open source technology development. Uber will also work towards solving complex technical problems and further promote open source adoption globally. Zemlin added, “Their expertise will be instrumental for our projects as we continue to advance open solutions for cloud-native technologies, deep learning, data visualization and other technologies that are critical to businesses today.”

Thuan Pham, Uber CTO, said, “The Linux Foundation not only provides homes to many significant open source projects but also creates an open environment for companies like Uber to work together on developing these technologies. We are honored to join the Linux Foundation to foster greater collaboration with the open source community.”

To know more about this membership in detail, head over to Uber Engineering.

Michelangelo PyML: Introducing Uber’s platform for rapid machine learning development
Uber posted a billion dollar loss this quarter. Can Uber Eats revitalize the Uber growth story?
Uber announces the 2019 Uber AI Residency

Day 1 of Chrome Dev Summit 2018: new announcements and Google’s initiative to close the gap between web and native

Sugandha Lahoti
13 Nov 2018
4 min read
The 6th Chrome Dev Summit is being hosted on the 12th and 13th of this month in San Francisco. Yesterday, Day 1 of the summit was opened by Ben Galbraith, the director of Chrome, with a talk on “the web platform’s latest advancements and the evolving landscape.” Leading web developers described their modern web experiences as well.

Major Chrome Dev Summit 2018 announcements included web.dev, a new developer resource website, and a demonstration of VisBug, a browser-based visual development tool. The summit also included a demo of a new web tool called Squoosh that can downsize, compress, and reformat images, and highlighted some of the browser APIs currently in development, including Web Share Target, Wake Lock, WebHID, and more. It also featured a Writable Files API currently under development, which would allow web apps to edit local files.

New web-based tools and resources

web.dev

The web.dev resource website provides an aggregation of information for modern web APIs. It helps users monitor their sites over time to ensure that they can keep their sites fast, resilient, and accessible. web.dev was created in partnership with Glitch and is deeply integrated with Google’s Lighthouse tool.

VisBug

The VisBug developer tool helps developers easily edit a web page using a simple point-and-click and drag-and-drop interface, rather than working through the page’s source code as Firebug-style inspectors do. VisBug is currently available as a Chrome extension that can be installed from the main Chrome Web Store.

Squoosh

The Squoosh tool allows you to encode images using best-in-class codecs like MozJPEG, WebP, and OptiPNG. It works cross-browser and offline, and all codecs are supported even in a browser with no native support, using WASM. The app can do a 1:1 visual comparison of the original image and its compressed counterpart, to help users understand the pros and cons of each format.
Closing the gap between web and native

Google is also taking initiatives to close the gap between the web and native, making it easy for developers to build great experiences on the open web. To that end, Chrome will work with other browser vendors to ensure interoperability and get early developer feedback, and proposals will be submitted to the W3C Web Incubator Community Group for feedback. According to Google, this open development process will be “no different than how we develop every other web platform feature.” The first initiative in this direction is the Writable Files API.

The Writable Files API

Currently under development, the Writable Files API is designed to increase the interoperability of web applications with native applications. Users can choose files or directories that a web app can interact with on the native file system, so developers don’t have to use a native wrapper like Electron to ship their web app. With the Writable Files API, developers can create a simple, single-file editor that opens a file, allows the user to edit it, and saves the changes back to the same file.

People were surprised that it was Google who jumped on this process rather than Mozilla, which has already implemented versions of a lot of these APIs. A Hacker News user said, “I guess maybe not having that skin in the game anymore prevented those APIs from becoming standardized? But these are also very useful for desktop applications. Anyways, this is a great initiative, it's about time a real effort was made to close that gap.”

Here’s a video playlist of all the Chrome Dev Summit sessions so far. Tune into Google’s livestream to follow the rest of the sessions of the day and watch this space for more exciting announcements.

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team.
Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications.
#GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment