
Tech News - Data


As Pichai defends Google’s “integrity” ahead of today’s Congress hearing, over 60 NGOs ask him to defend human rights by dropping DragonFly

Natasha Mathur
11 Dec 2018
3 min read
Google CEO Sundar Pichai is set to testify before the House Judiciary Committee today, having submitted written testimony to the House Committee ahead of the hearing. Pichai states in the testimony that there is no "political bias" within the company: "I lead this company without political bias and work to ensure that our products continue to operate that way. To do otherwise would go against our core principles and our business interests."

He also addresses data security, emphasizing that protecting the privacy and security of users has always been an "essential mission" for the organization, and that Google has consistently put in an enormous amount of work over the past years to bring "choice, transparency, and control" to its users. Pichai further highlights how users look to Google for accurate and trusted information, and how hard the company works to maintain the "integrity" of its products in order to live up to those standards. The testimony also discusses Google's contribution to the US economy and military, pointing out that despite Google's expansion and growth into new markets, it will always have "American roots".

Although the hearing, titled "Transparency & Accountability: Examining Google and its Data Collection, Use, and Filtering Practices", will focus on discussions of potential bias and the need for transparency within Google, its infamous Project Dragonfly will almost certainly be discussed as well. Google has faced continued criticism over its censored Chinese search engine, which was revealed earlier this year in a bombshell report by The Intercept.

Yesterday, more than 60 NGOs, as well as individuals including Edward Snowden, signed an open letter protesting against Google's Project Dragonfly and its other plans for China. "We are disappointed that Google in its letter of 26th October failed to address the serious concerns of human rights groups over Project Dragonfly", reads the letter addressed to Pichai. The letter argues that Google's response, along with other details about Project Dragonfly, only intensifies the fear that Google may compromise its commitments to human rights to gain access to the Chinese search market. It also sheds light on new details leaked to the media suggesting that if Google launches Project Dragonfly, it would accelerate "repressive state censorship, surveillance, and other violations" affecting almost a billion people in China.

The letter further notes that despite Google stating that it is "not close" to launching a search product in China and that it will consult key stakeholders before doing so, media reports say otherwise. Reports based on an internal Google memo suggested that the project was in a "pretty advanced state" and that the company had invested extensive resources in its development. "We welcome that Google has confirmed the company 'takes seriously' its responsibility to respect human rights. However, the company has so far failed to explain how it reconciles that responsibility with the company's decision to design a product purpose-built to undermine the rights to freedom of expression and privacy", reads the letter.

Read next:
Google bypassed its own security and privacy teams for Project Dragonfly, reveals The Intercept
Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?


OpenZeppelin 2.0 RC 1, a framework for writing secure smart contracts on Ethereum, is out!

Bhagyashree R
10 Sep 2018
3 min read
After concluding the release cycle of version 1.0 last month, OpenZeppelin marked the start of another release cycle by launching OpenZeppelin 2.0 RC 1 on September 7th. This release aims to deliver reliable updates to users, unlike some of the earlier releases, which were backwards-incompatible.

OpenZeppelin is a framework of reusable smart contracts for Ethereum and other EVM and eWASM blockchains. You can use it to build distributed applications, protocols, and organizations in the Solidity language.

What's new in OpenZeppelin 2.0 RC 1?

Changes
- This release provides a more granular system of roles, such as MinterRole. As with Ownable, the creator of a contract is initially assigned all roles, but can selectively grant them to other accounts. Ownable contracts have now moved to role-based access.
- To increase encapsulation, all state variables are now private. Derived contracts can no longer access state variables directly and must use getters.
- All event names have been changed to be consistently in the past tense, except those defined by an ERC.
- ERC721 is now separated into optional interfaces, Enumerable and Metadata; ERC721Full includes both extensions.
- In SafeMath, require is used instead of assert.
- The ERC721.exists function is now internal.
- Earlier, SplitPayment allowed deployment of an instance with no payees, which caused every call to claim to revert, making all Ether sent to it lost forever. The preconditions on the SplitPayment constructor arguments now prevent this scenario.
- The IndividuallyCappedCrowdsale interface is simplified by removing the concept of a user from the crowdsale flavor. The setGroupCap function, which took an array, is also removed, as this is not customary across the OpenZeppelin API.
- ERC contracts have all been renamed to follow the same convention: interfaces are called IERC##, and their implementations are ERC##.
- ERC20.decreaseApproval is renamed to decreaseAllowance, and its semantics have also changed to be more secure.
- MerkleProof.verifyProof is renamed to MerkleProof.verify.
- ECRecovery is renamed to ECDSA, and AddressUtils to Address.

Additions
- ERC165Query is added to query support for ERC165 interfaces.
- A new experimental contract is added to migrate ERC20 tokens with an opt-in strategy.
- A modulo operation, SafeMath.mod, is added to get the remainder of a division.
- Added Math.average.
- Added ERC721Pausable.

Removals
- The restriction on who can release funds in PullPayments, SplitPayment, PostDeliveryCrowdsale, and RefundableCrowdsale is removed.
- ERC20Basic is removed; now there's only ERC20.
- The Math.min64 and Math.max64 functions are removed, leaving only the uint256 variants.
- The Mint and Burn events are removed from ERC20Mintable and ERC20Burnable.
- A few contracts that were not generally secure enough are removed: LimitBalance, HasNoEther, HasNoTokens, HasNoContracts, NoOwner, Destructible, TokenDestructible, CanReclaimToken.

You can install the release candidate by running the npm install openzeppelin-solidity@next command. To read more about OpenZeppelin 2.0 RC 1, head over to OpenZeppelin's GitHub repository.

Read next:
The trouble with Smart Contracts
Ethereum Blockchain dataset now available in BigQuery for smart contract analytics
How to set up an Ethereum development environment [Tutorial]


Introducing AWS DeepRacer, a self-driving race car, and Amazon’s autonomous racing league to help developers learn reinforcement learning in a fun way

Amrata Joshi
29 Nov 2018
4 min read
Yesterday, at the AWS re:Invent conference, Andy Jassy, CEO of Amazon Web Services, introduced AWS DeepRacer and announced a global autonomous AWS DeepRacer racing league.

AWS DeepRacer

AWS DeepRacer is a 1/18th-scale, radio-controlled, self-driving four-wheel race car designed to help developers learn about reinforcement learning. The car features a 4-megapixel camera with 1080p resolution, an Intel Atom processor, multiple USB ports, and a 2-hour battery. It comes with 4GB of RAM, 32GB of expandable storage, a 13600mAh USB-C PD compute battery, and an embedded accelerometer and gyroscope.

The console, simulator, and car make a great combination for experimenting with RL algorithms and generalization methods. DeepRacer includes a fully configured cloud environment that users can use to train reinforcement learning models. The car uses its camera to view the track and a reinforcement learning model to control throttle and steering. AWS DeepRacer is integrated with Amazon SageMaker to take advantage of its new reinforcement learning support, and with AWS RoboMaker to provide a 3D simulation environment. It is also integrated with Amazon Kinesis Video Streams for streaming virtual simulation footage and Amazon S3 for model storage, and it supports Amazon CloudWatch for log capture.

AWS DeepRacer League

The AWS DeepRacer League gives users an opportunity to compete in a global racing championship, advance to the AWS DeepRacer Championship Cup at re:Invent 2019, and possibly win the AWS DeepRacer Cup. The league comprises two categories: live events and virtual events.

Live events
Developers can compete by submitting already-built or new reinforcement learning models to the virtual leaderboard for a Summit. The top ten champions will compete in live races on the track using AWS DeepRacer, and the Summit winners and top performers across the races will qualify for the AWS DeepRacer Championship Cup. The AWS DeepRacer League will launch in AWS Summit locations around the world, including Tokyo, London, Sydney, Singapore, and New York, in early 2019.

Virtual events
Developers can build RL models and compete online using the AWS DeepRacer console. The virtual races take place on challenging tracks in the 3D racing simulator.

What is in store for developers?

Learn reinforcement learning in a new way
AWS DeepRacer helps developers get started with reinforcement learning by providing hands-on tutorials for training RL models and testing them in a fun way, through the car racing experience (see the reward-function sketch at the end of this piece).

Get started quickly, anywhere
Developers can start training a model on the virtual track in minutes with the AWS DeepRacer console and 3D racing simulator, regardless of place or time.

Idea sharing
The DeepRacer League gives developers a platform to meet fellow machine learning enthusiasts, online and in person, to share ideas and insights, as well as an opportunity to compete and win prizes. Developers will also get a chance to learn about reinforcement learning via workshops.

No need to manually set up a software environment
The 3D racing simulator and car provide an ideal environment for testing the latest reinforcement learning algorithms. With DeepRacer, developers don't have to manually set up a software environment or simulator, or configure a training environment.

Public reaction to AWS DeepRacer is mostly positive, though a few have their doubts. Concerns range from CPU time and SageMaker requirements to shipping-related queries.
https://twitter.com/emurmur77/status/1067955546089607168
https://twitter.com/heri/status/1067927044418203648
https://twitter.com/mnbernstein/status/1067846826571706368
To know more about this news, check out Amazon's official blog.

Read next:
Amazon launches AWS Ground Station, a fully managed service for controlling satellite communications, downlink and processing satellite data
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
Learning To Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning
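Training in the DeepRacer console revolves around writing a Python reward function that scores the car's behavior at each simulation step. Below is a minimal center-line-following sketch of the kind the console expects; the parameter keys follow AWS's documented convention, but treat the exact names as an assumption rather than a guaranteed API.

```python
def reward_function(params):
    """Reward the agent for staying close to the center line.

    `params` is the state dictionary DeepRacer passes in at each step;
    the keys used here ('all_wheels_on_track', 'track_width',
    'distance_from_center') are assumed from the console's documentation.
    """
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward when the car leaves the track

    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Grade the distance from the center line into three bands.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    elif distance_from_center <= 0.25 * track_width:
        return 0.5
    elif distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3  # effectively off-track
```

The banded reward shapes the policy gradually: small deviations are still rewarded, so the agent learns smooth corrections rather than oscillating around the center line.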


Google launches Live Transcribe, a free Android app to make conversations more accessible for the deaf

Natasha Mathur
06 Feb 2019
3 min read
Google announced a new, free Android app called Live Transcribe earlier this week. Live Transcribe is aimed at making real-world conversations more accessible globally for deaf and hard-of-hearing (HoH) people. Powered by Google Cloud, it automatically captions conversations in real time, and it supports more than 70 languages, covering more than 80% of the world's population.

How does Live Transcribe work?

Live Transcribe combines the results of extensive user experience (UX) research with sustainable connectivity to speech processing servers. The team used cloud ASR (Automated Speech Recognition) for greater accuracy; to keep that server connectivity from causing excessive data usage, an on-device neural network-based speech detector was implemented (sketched in code at the end of this piece).

https://www.youtube.com/watch?v=jLCwjIaPXwA

The on-device speech detector is built using Google's dataset for audio event research, AudioSet, announced last year. It uses an image-like model capable of detecting speech, automatically managing network connections to the cloud ASR engine, and minimizing data usage over long periods of use.

Additionally, the Google team partnered with Gallaudet University, through user experience research collaborations, to make Live Transcribe intuitive. This in turn helps ensure that core user needs are satisfied while maximizing the app's potential. Google considered devices ranging from computers, tablets, and smartphones to small projectors for effectively displaying auditory information and captions. After rigorous analysis, Google chose smartphones because of their "sheer ubiquity" and enhanced capabilities.

Addressing the transcription confidence issue

Google mentions that while building Live Transcribe, the team faced a challenge around displaying transcription confidence. The researchers explored whether they needed to show word-level or phrase-level confidence, which has traditionally been considered helpful. Drawing on previous UX research, they found that a transcript is easiest to read when it is not layered with confidence markers, so they focused on better presentation of the text and supplemented it with auditory signals other than speech. Another useful UX signal is the noise level of the current environment; to address this, the researchers built an indicator that visualizes the volume of user speech relative to background noise. This gives users instant feedback on microphone performance, allowing them to adjust the placement of the phone.

What next?

To enhance the capabilities of this mobile-based automatic speech transcription service, the researchers plan to add on-device recognition, speaker separation, and speech enhancement. "Our research with Gallaudet University shows that combining it with other auditory signals like speech detection and a loudness indicator makes a tangibly meaningful change in communication options for our users", state the researchers.

Google has currently rolled out the test version of Live Transcribe on the Play Store, and it comes pre-installed on all Pixel 3 devices with the latest update.

Public reaction to the news has been largely positive, with people appreciating the newly released app:
https://twitter.com/MattWilliams84/status/1092510959988629505
https://twitter.com/iamAbhisarW/status/1092642493504589826
https://twitter.com/seanmarnold/status/1092508455200587776
For more information, check out the official Live Transcribe blog.

Read next:
Transformer-XL: A Google architecture with 80% longer dependency than RNNs
Google News Initiative partners with Google AI to help 'deep fake' audio detection research
Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
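The data-saving pattern described above — run a cheap speech detector locally and only hold a connection to the cloud ASR engine while speech is present — can be sketched as follows. This is an illustrative Python sketch, not Google's implementation: the simple energy threshold stands in for the on-device neural network detector, and CloudASRClient is a hypothetical placeholder.

```python
import numpy as np

class CloudASRClient:
    """Hypothetical stand-in for a streaming cloud ASR connection."""
    def open(self): print("ASR stream opened")
    def send(self, frame): pass      # captions would come back from the server
    def close(self): print("ASR stream closed")

def is_speech(frame, threshold=0.02):
    # Stand-in for the on-device neural network speech detector:
    # here, a crude RMS-energy test on a mono float32 audio frame.
    return np.sqrt(np.mean(frame ** 2)) > threshold

def transcribe_stream(frames, asr=None):
    """Keep the network connection open only while speech is detected."""
    asr = asr or CloudASRClient()
    streaming = False
    for frame in frames:
        if is_speech(frame):
            if not streaming:
                asr.open()      # connect lazily, on first detected speech
                streaming = True
            asr.send(frame)
        elif streaming:
            asr.close()         # silence: drop the connection, save data
            streaming = False
    if streaming:
        asr.close()
```

The point of the gate is that silence, which dominates long conversations, never touches the network; only frames the local detector flags as speech are streamed.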


Google faces scrutiny from the Irish DPC and the FTC, and an antitrust probe by US state attorneys general, over its data collection and advertising practices

Savia Lobo
09 Sep 2019
5 min read
Google has been under scrutiny for its questionable data collection and advertising practices in recent times. The company has previously been hit with three antitrust fines by the EU, for a total antitrust bill of around $9.3 billion to date. Today, more than 40 state attorneys general will launch a separate antitrust investigation targeting Google and its advertising practices. Last week, evidence from an investigation into how Google uses secret web pages to collect user data and expose this information to targeted advertisers was submitted to the Irish Data Protection Commission, the main watchdog over Google in the European Union. And following an investigation launched into YouTube by the Federal Trade Commission earlier this year, Google and YouTube have been fined $170 million to settle allegations that YouTube broke federal law by collecting children's personal information.

Over 40 state attorneys general open antitrust investigations into Google

The state watchdogs are initiating antitrust investigations against Silicon Valley's largest companies, including Google and Facebook, probing whether they undermine rivals and harm consumers, according to The Washington Post. Today, more than 40 attorneys general are expected to launch a separate antitrust investigation targeting Google and its advertising practices. Details of this investigation are unknown; however, according to The Wall Street Journal, the attorneys will focus on Google's impact on digital advertising markets. On Friday, New York's attorney general, Letitia James, also announced that the attorneys general of eight states and the District of Columbia are launching an antitrust investigation into Facebook.

https://twitter.com/NewYorkStateAG/status/1169942938023071744

Keith Ellison, the attorney general from Minnesota who is signing on to the effort to probe Google, said, "The growth of these [tech] companies has outpaced our ability to regulate them in a way that enhances competition." We will update this space once the antitrust investigations into Google are initiated.

Irish DPC to investigate whether Google secretly feeds users' data to advertisers

An investigation by Johnny Ryan, chief policy officer of the web browser Brave, revealed that Google used hidden secret web pages to collect user data and create profiles, exposing users' personal information to targeted advertisers. In May, the DPC opened an investigation into Google's Authorized Buyers real-time bidding (RTB) ad exchange, which connects ad buyers with millions of websites selling their inventory. Ryan filed a GDPR complaint in Ireland over Google's RTB system in 2018, arguing that Google and ad companies expose personal data during RTB bid requests on sites that use Google's behavioral advertising.

In his recent evidence, Ryan discovered the secret web pages when he monitored the trading of his personal data on Google's ad exchange, Authorized Buyers. He found that Google "had labelled him with an identifying tracker that it fed to third-party companies that logged on to a hidden web page. The page showed no content but had a unique address that linked it to Mr Ryan's browsing activity," The Financial Times reports. Google allowed advertisers to combine information about him through hidden "push" pages, which are not visible to web users and could make it easier to identify people online, the Telegraph said.

"This constant leaking of personal data, that seems to be happening constantly, needs to be urgently addressed by regulators," Ryan told the Telegraph. He said that "the data compiled by users can then be shared by companies without Google's knowledge, allowing them to more easily build and keep virtual profiles of Google's users without their consent," the Telegraph further reported. To know more about this story, read our detailed coverage of Brave's findings: "Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case".

FTC scrutiny leads to Google and YouTube paying a $170 million penalty for violating children's online privacy

In June this year, the Federal Trade Commission (FTC) launched an investigation into YouTube over mishandling children's private data. The investigation was triggered by complaints from children's health and privacy groups, which said YouTube improperly collected data from kids using the video service, violating the Children's Online Privacy Protection Act, a 1998 law known as COPPA that forbids the tracking and targeting of users younger than 13.

Also read: FTC to investigate YouTube over mishandling children's data privacy

On September 4, the FTC said that YouTube and its parent company, Google, will pay a penalty of $170 million to settle the allegations. YouTube said in a statement on Wednesday last week that in four months it would begin treating all data collected from people watching children's content as if it came from a child. "This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service," YouTube said on its blog.

FTC Chairman Joe Simons said, "No other company in America is subject to these types of requirements and they will impose significant costs on YouTube." According to Reuters, "FTC's Bureau of Consumer Protection director Andrew Smith told reporters that the $170 million settlement was based on revenues from data collected, times a multiplier." New York Attorney General Letitia James said, "Google and YouTube knowingly and illegally monitored, tracked, and served targeted ads to young children just to keep advertising dollars rolling in."

In a separate statement, Simons and FTC Commissioner Christine Wilson said the settlement will require Google and YouTube to create a system "through which content creators must self-designate if they are child-directed. This obligation exceeds what any third party in the marketplace currently is required to do." To know more about this news in detail, read the FTC and New York Attorney General's statements.

Other interesting news
Google open sources their differential privacy library to help protect user's private data
What can you expect at NeurIPS 2019?
Key Skills every Database Programmer should have


Unity and DeepMind partner to develop virtual worlds for advancing Artificial Intelligence

Sugandha Lahoti
27 Sep 2018
2 min read
Unity has announced its collaboration with DeepMind to develop virtual environments for advancing Artificial Intelligence. The two companies will create virtual environments for developing and testing experimental algorithms. The announcement is a broad agreement between the two companies, with little information disclosed about their actual plans at this point.

Unity is the most widely used real-time development platform, powering 60% of all AR/VR content and 50% of all mobile games worldwide. With this partnership, it is taking initial steps toward becoming the general platform for the development of intelligent agents and the creation of simulation environments. These virtual environments will be used to generate and capture synthetic data for different automotive and industrial verticals.

Unity has been exploring Artificial Intelligence for quite some time. Earlier this month, it released a new version of its ML-Agents toolkit to, among other things, make it easier to integrate ML-Agents environments into training workflows. It also has a TensorFlow-based algorithm that allows game developers to easily train intelligent agents for 2D, 3D, and VR/AR games; these trained agents are then used to control NPC behavior within games (see the Python sketch at the end of this piece).

DeepMind is no stranger to games either. Demis Hassabis, co-founder and CEO of DeepMind, says, "Games and simulations have been a core part of DeepMind's research programme from the very beginning and this approach has already led to significant breakthroughs in AI research." In 2016, DeepMind's AlphaGo defeated the South Korean Go champion Lee Sedol 4-1. Another of its programs, AlphaGo Zero, perfected its Go and chess skills simply by playing against itself iteratively. Alongside its work training AI agents to play games, DeepMind has developed AI for spotting over 50 sight-threatening eye diseases and has recently released Dopamine, a TensorFlow-based framework for reinforcement learning.

Read next:
Why DeepMind made Sonnet open source
Key Takeaways from the Unity Game Studio Report 2018
Best game engines for Artificial Intelligence game development
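For readers curious what driving a Unity environment from Python looks like, here is a minimal sketch against the ML-Agents Python interface of roughly that era. The API has changed considerably across releases, so treat the import path, method names, and the "3DBall" scene name as assumptions.

```python
# Sketch of stepping a Unity environment from Python with ML-Agents.
# Assumes the 2018-era API (mlagents.envs); later releases renamed the
# package and reshaped the interface, so adjust for your version.
import numpy as np
from mlagents.envs import UnityEnvironment

env = UnityEnvironment(file_name="3DBall")   # a built Unity scene (assumed name)
brain_name = env.brain_names[0]              # agents are grouped under a "brain"

info = env.reset(train_mode=True)[brain_name]
for _ in range(100):
    n_agents = len(info.agents)
    # Random continuous actions as a stand-in for a trained policy.
    actions = np.random.uniform(-1.0, 1.0, size=(n_agents, 2))
    info = env.step(actions)[brain_name]     # observations, rewards, done flags
env.close()
```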

Microsoft quietly deleted 10 million faces from MS Celeb, the world’s largest facial recognition database

Fatema Patrawala
07 Jun 2019
4 min read
Yesterday, the Financial Times reported that Microsoft has quietly deleted its facial recognition database. More than 10 million images, reportedly used by companies to test their facial recognition software, have been deleted. The database, known as MS Celeb, was the largest public facial recognition dataset in the world. The data was amassed by scraping images off the web under a Creative Commons license that allows academic reuse of photos. According to Microsoft Research's paper on the database, it was originally designed to train tools for image captioning and news video analysis.

The existence of the database was revealed by Adam Harvey, a Berlin-based artist and researcher. Harvey's team investigates the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies. The Financial Times ran an in-depth investigation revealing that giant tech companies like IBM and Panasonic, Chinese firms such as SenseTime and Megvii, as well as military researchers, were using the massive database to test their facial recognition software. And now Microsoft has quietly taken MS Celeb down.

"The site was intended for academic purposes," Microsoft told FT.com, explaining that the database had been deleted because "it was run by an employee that is no longer with Microsoft and has since been removed."

Microsoft itself has used the dataset to train facial recognition algorithms, Harvey's investigation found. The company named the dataset "Celeb" to indicate that the faces it had scraped were photos of public figures, but Harvey found that it also included several arguably private individuals, including security journalists such as Kim Zetter and Adrian Chen, Shoshana Zuboff, the author of Surveillance Capitalism, and Julie Brill, the former FTC commissioner responsible for protecting consumer privacy. "Microsoft has exploited the term 'celebrity' to include people who merely work online and have a digital identity," said Harvey. "Many people in the target list are even vocal critics of the very technology Microsoft is using their name and biometric information to build."

Tech experts have also speculated that Microsoft may have deleted the data because continuing to distribute the MS Celeb dataset after the EU's General Data Protection Regulation came into effect last year would violate that law. But Microsoft said it was not aware of any GDPR implications and that the site had been retired "because the research challenge is over". Engadget also reported that after the FT's investigation, datasets built by researchers at Duke University and Stanford University were taken down as well.

According to Fast Company, Microsoft's president, Brad Smith, spoke last year about fears of such technology creeping into everyday life and eroding our civil liberties along the way, and the company turned down a facial recognition contract with California law enforcement on human rights grounds. But while Microsoft may say it wants regulation for facial recognition, it has also pursued retail applications of the technology, as in its partnership with the grocery chain Kroger, and it has eluded privacy-related scrutiny for years.

Although the database has been deleted, it remains available to researchers and companies that had previously downloaded it: once a dataset has been posted online and downloaded, copies continue to exist.

https://twitter.com/jacksohne/status/1136975380387172355

Now free of the licensing, rules, and controls Microsoft previously imposed, the dataset is being posted on GitHub and hosted on Dropbox and Baidu Cloud, and there is no way to stop people from continuing to share it and use it for their own purposes.

https://twitter.com/sedyst/status/1136735995284660224

Read next:
Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity
Microsoft open sources SPTAG algorithm to make Bing smarter!
Introducing Minecraft Earth, Minecraft's AR-based game for Android and iOS users


Introducing Azure Databricks: Spark analytics comes to the cloud

Savia Lobo
16 Nov 2017
3 min read
Microsoft recently announced the stirring combination of the Apache Spark analytics platform and the Azure cloud at Microsoft Connect();. Presenting: Azure Databricks! Azure Databricks is a close collaboration between Microsoft and Databricks to bring about benefits not present in any other cloud platform.

Azure Databricks: The trinity effect

This is the very first time that an Apache Spark platform provider, Databricks, has partnered with a cloud provider, Microsoft Azure, to deliver a highly optimized platform for data analytics workloads. Data management in the cloud has opened up pathways in the fields of Artificial Intelligence, predictive analytics, and real-time applications. Apache Spark has been everyone's favorite platform for implementing these cutting-edge analytics applications, thanks to its vast community and worldwide enterprise network. Its ability to run powerful analytics algorithms at scale allows businesses to derive real-time insights with ease.

However, managing and deploying Spark for enterprise use cases, which involve large numbers of users and strong security requirements, has been challenging. Azure Databricks addresses this by providing business users with a platform to work effectively with data professionals: data scientists and data engineers.

Benefits of Azure Databricks: A sneak peek

- Highly optimized for cost-efficient, improved performance in the cloud, with an end-to-end, managed Apache Spark platform.
- Features such as one-click deployment, autoscaling, and an optimized Databricks Runtime that can improve the performance of Spark jobs in the cloud by 10-100x.
- A simple and cost-efficient way to run large-scale Spark workloads.
- An interactive notebook environment, along with monitoring tools and security controls, that makes it easy to leverage Spark in enterprises with large numbers of users.
- Optimized connectors to Azure storage platforms (e.g. Data Lake and Blob Storage) for fast data access (see the notebook sketch at the end of this piece).
- One-click management directly from the Azure console.
- Common analytics libraries, such as the Python and R data science stacks, pre-installed for use with Spark to derive insights.

The partnership architecture

Source: https://azure.microsoft.com/en-us/blog/a-technical-overview-of-azure-databricks/

Azure Databricks has an architecture that allows customers to easily and effectively connect Azure Databricks to any storage resource in their account, for instance an existing Blob store or Data Lake subscription. Databricks is also centrally managed through the Azure control center, so it requires no additional setup.

Fully integrated Azure features

Azure Databricks integrates the best of Microsoft Azure's features, including:

- Secure and private data control, where ownership rights remain with the customer alone
- Support for diverse network infrastructure needs
- Integration with Azure Storage and Azure Data Lake
- Azure Active Directory, which provides control of access to the resources used
- Integration with Azure SQL Data Warehouse, Azure SQL DB, and Azure Cosmos DB
- The latest generation of Azure hardware (Dv3 VMs), with NVMe SSDs capable of 100us latency on I/O, making Databricks I/O performance even better

Azure has many other features that have been integrated into Azure Databricks. For a more detailed overview of Azure Databricks, you can visit the official link here.
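To make the storage-connector point concrete, here is a minimal PySpark sketch of the kind you might run in an Azure Databricks notebook, reading a CSV from Blob Storage via the wasbs:// connector. The storage account, container, and file names are hypothetical placeholders, and a real workspace would use secrets management rather than an inline key.

```python
# Minimal PySpark sketch for an Azure Databricks notebook, where the
# `spark` session is predefined. Account/container/path are placeholders.
storage_account = "mystorageacct"   # hypothetical
container = "mycontainer"           # hypothetical

# Configure access to Blob Storage (prefer a secret scope over a literal key).
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.blob.core.windows.net",
    "<storage-account-key>")

# Read a CSV directly from Blob Storage through the optimized connector.
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(f"wasbs://{container}@{storage_account}.blob.core.windows.net/events.csv"))

df.groupBy("event_type").count().show()
```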


Australia’s Facial recognition and identity system can have “chilling effect on freedoms of political discussion, the right to protest and the right to dissent”: The Guardian report

Bhagyashree R
09 Nov 2018
5 min read
On Wednesday, The Guardian reported that various civil rights groups and experts are warning that near real-time matching of citizens' facial images risks a profound chilling effect on protest and dissent. The facial recognition system is capable of rapidly matching pictures of people captured on CCTV with photos stored in government records to detect criminals and identity theft.

What is this facial recognition and identity system?

In October last year, the Australian government agreed to establish a National Facial Biometric Matching Capability and signed an Intergovernmental Agreement on Identity Matching Services. The system is meant to make it easier for security and law enforcement agencies to identify suspects or victims of terrorism or other criminal activity, and to combat identity crime. Under the agreement, agencies in all jurisdictions are allowed to use the new face matching service to access passport, visa, citizenship, and driver license images. The system consists of two parts:

Face Verification Service (FVS): a one-to-one, image-based verification service that matches a person's photo against an image in a government record to help verify their identity.

Face Identification Service (FIS): unlike FVS, a one-to-many, image-based identification service that matches a photo of an unknown person against multiple government records to help identify the person.

What concerns does the system pose?

Since its introduction, the facial recognition and identity system has raised major concerns among academics, privacy experts, and civil rights groups. It records and processes citizens' sensitive biometric information regardless of whether they have committed, or are suspected of, an offense. In a submission to the Parliamentary Joint Committee on Intelligence and Security, Professor Liz Campbell of Monash University points out that "the capability" breaches privacy rights: it allows the collection, storage, and sharing of personal details from people who are not even suspected of an offense.

According to Campbell, the facial recognition and identity system is also prone to errors: "Research into identity matching technology indicates that ethnic minorities and women are misidentified at higher rates than the rest of the population." On investigating the FBI's facial recognition system, the US House Committee on Oversight and Government Reform likewise found inaccuracies: "Facial recognition technology has accuracy deficiencies, misidentifying female and African American individuals at a higher rate. Human verification is often insufficient as a backup and can allow for racial bias." These inaccuracies often stem from the underlying algorithms, which are better at identifying people who look like their creators; in the British and Australian context, that means they are good at identifying white men.

In addition to these inaccuracies, there are concerns about the level of access given to private corporations and the legislation's loose wording, which could allow it to be used for purposes other than combating criminal activity. Lesley Lynch, the deputy president of the NSW Council for Civil Liberties, believes these systems will harm our freedom of political discussion: "It's hard to believe that it won't lead to pressure, in the not too distant future, for this capability to be used in many contexts, and for many reasons. This brings with it a real threat to anonymity. But the more concerning dimension is the attendant chilling effect on freedoms of political discussion, the right to protest and the right to dissent. We think these potential implications should be of concern to us all."

What are the supporters saying?

Despite these concerns, New South Wales is in favor of the capability and is legislating to allow state driver's licenses to be shared with the commonwealth, investing $52.6m over four years to facilitate its rollout. Samantha Gavel, NSW's privacy commissioner, said that the facial recognition and identity system has been designed with "robust" privacy safeguards. Gavel said the system was developed in consultation with state and federal privacy commissioners, and she expressed confidence in the protections limiting access by private corporations: "I understand that entities will only have access to the system through participation agreements and that there are some significant restraints on private sector access to the system."

David Elliott, NSW Minister for Counter-Terrorism, said the system will help prevent identity theft and that there will be limits on its use. Elliott said in state parliament: "People will not be charged for jaywalking just because their facial biometric information has been matched by law enforcement agencies. The Government will make sure that members of the public who have a driver license are well and truly advised that this information and capability will be introduced as part of this legislation. I am an avid libertarian when it comes to freedom from government interference and [concerns] have been forecasted and addressed in this legislation."

To read the full story, head over to The Guardian's official website.

Read next:
Google's new facial recognition patent uses your social network to identify you!
Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
Emotional AI: Detecting facial expressions and emotions using CoreML [Tutorial]


Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project

Richard Gall
21 Jul 2018
2 min read
Marvel's Infinity War might well have been 'the most ambitious crossover in history', but there's a new crossover that might just beat it. In a blog post, Google has revealed that it's working with Microsoft, Facebook, and Twitter on something called the 'Data Transfer Project'.

The Data Transfer Project is, according to Google, "an open source initiative dedicated to developing tools that will enable consumers to transfer their data directly from one service to another, without needing to download and re-upload it." Essentially, the project is about making data more portable for users. For anyone who has ever tried to move data from one service to another, that could save some massive headaches.

Standardizing and securing data with the Data Transfer Project

The tools being developed by Google, Microsoft, Facebook, and Twitter should be able to transform a proprietary API into a standardized format. Google explains that "this makes it possible to transfer data between any two providers using existing industry-standard infrastructure and authorization mechanisms, such as OAuth." Adapters for 7 different services and 5 different data formats have already been developed (the sketch at the end of this piece shows the general shape of the idea).

With trust and security being two key issues for consumers in tech, Google was also keen to point out how the Data Transfer Project is fully committed to data security: "Services must first agree to allow data transfer between them, and then they will require that individuals authenticate each account independently. All credentials and user data will be encrypted both in transit and at rest. The protocol uses a form of perfect forward secrecy where a new unique key is generated for each transfer. Additionally, the framework allows partners to support any authorization mechanism they choose. This enables partners to leverage their existing security infrastructure when authorizing accounts."

Google urges the developer community to get involved. You can find the source code for the project here and learn more about the history of the project in a white paper here.

Read next:
Why Twitter (finally!) migrated to Tensorflow
5 reasons government should regulate technology
Microsoft's Brad Smith calls for facial recognition technology to be regulated
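The core idea — per-service adapters that export from a proprietary API into a shared data model, which any other adapter can then import — can be sketched as follows. This is an illustrative Python sketch, not the project's actual code, and every class and method name here is hypothetical.

```python
# Illustrative sketch of the exporter/importer-adapter idea behind the
# Data Transfer Project. All names are hypothetical inventions.
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class Photo:                      # a shared, provider-neutral data model
    title: str
    url: str
    album: str

class Exporter(Protocol):
    def export(self) -> Iterable[Photo]: ...

class Importer(Protocol):
    def import_items(self, photos: Iterable[Photo]) -> None: ...

class ServiceAExporter:
    """Adapter: wraps Service A's proprietary API behind the common model."""
    def __init__(self, client):  # `client` = an OAuth-authorized API client
        self.client = client

    def export(self) -> Iterable[Photo]:
        for item in self.client.list_photos():          # proprietary call
            yield Photo(item["name"], item["src"], item["collection"])

class ServiceBImporter:
    """Adapter: writes the common model back out through Service B's API."""
    def __init__(self, client):
        self.client = client

    def import_items(self, photos: Iterable[Photo]) -> None:
        for p in photos:
            self.client.upload(p.url, title=p.title, album=p.album)

def transfer(exporter: Exporter, importer: Importer) -> None:
    """Direct service-to-service transfer: no user download/re-upload."""
    importer.import_items(exporter.export())
```

With N services, each provider writes one exporter and one importer against the shared model instead of N-1 pairwise converters; that is what makes the approach scale.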

Mozilla introduces Track THIS, a new tool that will create fake browsing history and fool advertisers

Amrata Joshi
27 Jun 2019
4 min read
Most of us worry about our activity being tracked on the internet. Remember the last time you saw ads based on your interests or browsing history and started wondering whether you were being tracked? Most of our activity is tracked on the web through cookies, which note things such as language preferences and websites visited. The problem doubles when data brokers and advertising networks use these cookies to collect user information without consent. Users need control over what advertisers know about them.

This month, the team at Mozilla Firefox announced Enhanced Tracking Protection, which is on by default in the flagship Firefox Quantum browser and protects against third-party cookies. Two days ago, the team also announced the launch of a project called Track THIS, a tool that can help you fool advertisers.

Track THIS opens up 100 tabs crafted to fit a specific character: a hypebeast, a filthy rich person, a doomsday prepper, or an influencer. Users' browsing history is depersonalized in a way that leaves advertisers struggling to target ads, because the tool confuses them. Track THIS will show users ads for products they may not be interested in at all; users will still see ads, just not targeted ones.

The official blog post reads, "Let's be clear, though. This will show you ads for products you might not be interested in at all, so it's really just throwing off brands who want to advertise to a very specific type of person. You'll still be seeing ads. And eventually, if you just use the internet as you typically would day to day, you'll start seeing ads again that align more closely to your normal browsing habits. If you'd rather straight-up block third-party tracking cookies, go ahead and get Enhanced Tracking Protection in Firefox."

How Track THIS works:
1. Before trying Track THIS, save your work and manage your tabs, or open a new window or browser to start the process.
2. Track THIS will itself open 100 tabs.
3. Choose a profile to trick advertisers into thinking you are someone else.
4. Confirm that you are ready to open 100 tabs based on that profile.
5. Close all 100 tabs and open up a new window.

The ads will only be affected for a few days, as ad trackers will soon start reflecting your normal browsing habits again. Once done experimenting, users can get Firefox with Enhanced Tracking Protection to block third-party tracking cookies by default.

It seems users are excited about this news, as they will be able to get rid of targeted advertisements.
https://twitter.com/minnakank/status/1143863045447458816
https://twitter.com/inthecompanyof/status/1143842275476299776

A few users are wary of using the tool on their phones and are skeptical about the 100 tabs. One user commented on Hacker News, "I'm really afraid to click one of those links on mobile. Does it just spawn 100 new tabs?" Another user commented, "Not really sure that a browser should allow a site to open 100 tabs programmatically, if anything this is telling me that Firefox is open to such abuse."

To know more about this news, check out the official blog post.

Read next:
Mozilla releases Firefox 67.0.3 and Firefox ESR 60.7.1 to fix a zero-day vulnerability, being abused in the wild
Mozilla to bring a premium subscription service to Firefox with features like VPN and cloud storage
Mozilla makes Firefox 67 "faster than ever" by deprioritizing least commonly used features


Google announces new Artificial Intelligence features for Google Search on its 20th birthday

Sugandha Lahoti
25 Sep 2018
5 min read
At the "Future of Search" event held in San Francisco yesterday, Google celebrated its 20th anniversary by announcing a variety of new features for the Google Search engine. The proprietary search engine uses sophisticated machine learning, computer vision, and data science. The focus of the event was Artificial Intelligence and making new features available on smartphones. Let's look at what was announced.

Activity cards on Google Discover

Perhaps the most significant feature is Google Discover, a completely revamped version of Google Feed, the content discovery news feed available in the dedicated Google app and on Google's homepage. It now has a new look, brand, and feel, and features more granular controls over the content that appears. A new feature called activity cards will show up in a user's search results if they have searched one topic repeatedly. The activity card helps users pick up where they left off in Google Search: they can retrace their steps to find useful information they came across earlier without remembering which sites had it. Google Discover starts with English and Spanish in the U.S. and will expand to more languages and countries soon.

Collections in Google Search

Collections in Google Search help users keep track of content they have visited, such as a website, article, or image, and quickly get back to it later. Users can now add content from an activity card directly to Collections, making it easy to keep track of and organize the content they want to revisit.

Dynamic organization with the Knowledge Graph

Users will see more videos and fresh visual content, as well as evergreen content: articles and videos that aren't new to the web, but are new to you. This feature uses the Topic Layer in the Knowledge Graph to predict a user's level of expertise on a topic and help them further develop their interests. The Knowledge Graph can intelligently show relevant content rather than prioritizing chronological order; content appears based on user engagement and browsing history. The Topic Layer is built by analyzing all the content that exists on the web for a given topic and developing hundreds and thousands of subtopics, then looking for patterns to understand how these subtopics relate to each other, in order to surface the next content a user may want to view.

AMP Stories

Google will now use Artificial Intelligence to create AMP Stories, which will appear in both Google Search and image search results. AMP Stories is Google's open source library that enables publishers to build web-based flipbooks with smooth graphics, animations, videos, and streaming audio.

Featured Videos

The next enhancement is Featured Videos, which will semantically link to subtopics of searches in addition to top-level content. Google will automatically generate preview clips for relevant videos, using AI to find the most relevant parts of each clip.

Google Lens

Google has also improved its image search algorithm: images will now be sorted by the relevance of the web results they correspond to, and image search results will contain more information about the pages they come from. Google also announced Google Lens, its visual search tool, for the web. Lens in Google Images will analyze and detect objects in snapshots and show relevant images.

Better SOS Alerts

Google is also updating SOS Alerts on Google Search and Maps with AI. It will use AI and significant computational power to create better forecasting models that predict when and where floods will occur. This information will also be intelligently incorporated into Google Public Alerts.

Improving job search with Pathways

Google is also improving its job search with AI by introducing a new feature called Pathways. When someone searches for jobs on Google, they will be shown jobs available right now in their area, along with information about effective local training and education programs. To learn in detail about where Google is headed next, read their blog post, Google at 20.

Policy change to Google Chrome sign-in: an unexpected surprise from Google

The team also announced a policy change in Google's popular Chrome browser, which was not well received. Under the change, the browser automatically logs users into Chrome when they sign in to other Google services. This has people worried about their privacy, as it could lead to Google tracking their browsing history and collecting data to target them with ads. Prior to this unexpected change, it was possible to sign in to a Google service, such as Gmail, via the Chrome browser without actually logging in to the browser itself.

Adrienne Porter Felt, engineer and manager at Google Chrome, has however clarified the issue. She said that Chrome browser sign-in does not mean that Chrome automatically sends your browsing history to your Google account. She further added, "My teammates made this change to prevent surprises in a shared device scenario. In the past, people would sometimes sign out of the content area and think that meant they were no longer signed into Chrome, which could cause problems on a shared device." Read the Google clarification report for more details.

Read next:
The AMPed up web by Google
Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!
Pay your respects to Inbox, Google's email innovation is getting discontinued


Baidu Apollo autonomous driving vehicles get a machine learning-based auto-calibration system

Bhagyashree R
03 Sep 2018
2 min read
The Apollo community has built a machine learning-based auto-calibration system for autonomous driving vehicles. By August 2018, the system had been tested for more than two thousand hours over roughly ten thousand kilometers (6,213 miles) of road tests and has proven effective. Because the system is automated and intelligent, it is suitable for mass-scale self-driving vehicle deployment.

Why was the Apollo auto-calibration system introduced?

The main issues with the existing approach are:

Manual calibration is time-consuming and error-prone: The performance and safety of an autonomous driving vehicle depend on its control module. This module includes control algorithms that take vehicle dynamics as input and then send commands to manipulate the vehicle. Performing this calibration in real time is difficult, which is why most research-oriented autonomous vehicles are calibrated manually, one by one. Manual calibration consumes a lot of time and is prone to human error.

Vehicle dynamics vary: Vehicle dynamics change while driving (loads change, vehicle parts wear out over time, surface friction varies), and manual calibration cannot possibly cover all of this.

How does the Apollo auto-calibration system work?

The auto-calibration system depends on the Apollo control module and consists of an offline model and an online learning algorithm (a toy sketch of the idea appears at the end of this piece).

Offline model
First, a calibration table is generated from human driving data that best reflects the vehicle's longitudinal performance at the time of driving. It performs three functions:
- Collects human driving data
- Preprocesses the data and selects input features
- Generates the calibration table through machine learning models

Online learning
The online algorithm updates the offline table based on real-time feedback in self-driving mode, trying to best match the current vehicle dynamics against the offline model built from manual driving data. It performs the following functions:
- Collects vehicle status and feedback in real time
- Preprocesses and filters the data
- Adjusts the calibration table accordingly

To learn more about how this model works and how it addresses the manual calibration problem, check out the published paper: Baidu Apollo Auto-Calibration System - An Industry-Level Data-Driven and Learning based Vehicle Longitude Dynamic Calibrating Algorithm.

Read next:
Apollo 11 source code: A small step for a woman, and a huge leap for 'software engineering'
Baidu open sources ApolloScape and collaborates with Berkeley DeepDrive to further machine learning in automotives
Tesla is building its own AI hardware for self-driving cars
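To make the offline-table-plus-online-update idea concrete, here is a toy Python sketch: a calibration table maps (speed, command) to expected acceleration, an offline pass fits it from logged human driving, and an online step nudges cells toward real-time feedback. This is not Apollo's code; the grid layout, learning rate, and lookup scheme are all illustrative assumptions.

```python
import numpy as np

class CalibrationTable:
    """Toy (speed, command) -> acceleration table with online updates."""

    def __init__(self, speeds, commands):
        self.speeds = np.asarray(speeds)      # grid of speeds (m/s)
        self.commands = np.asarray(commands)  # grid of throttle/brake commands
        self.table = np.zeros((len(speeds), len(commands)))

    def _index(self, speed, command):
        # Nearest-neighbor cell lookup (illustrative; a real system
        # would interpolate and fit the table with learned models).
        i = int(np.abs(self.speeds - speed).argmin())
        j = int(np.abs(self.commands - command).argmin())
        return i, j

    def fit_offline(self, driving_log):
        """Offline model: average observed acceleration per grid cell
        from human driving data (speed, command, acceleration) triples."""
        counts = np.zeros_like(self.table)
        for speed, command, accel in driving_log:
            i, j = self._index(speed, command)
            counts[i, j] += 1
            self.table[i, j] += (accel - self.table[i, j]) / counts[i, j]

    def update_online(self, speed, command, measured_accel, lr=0.1):
        """Online learning: nudge a cell toward real-time feedback,
        tracking drift in loads, wear, and surface friction."""
        i, j = self._index(speed, command)
        self.table[i, j] += lr * (measured_accel - self.table[i, j])

    def lookup(self, speed, command):
        i, j = self._index(speed, command)
        return self.table[i, j]
```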

Oakland Privacy Advisory Commission lays out privacy principles for Oaklanders and proposes ban on facial recognition tech

Amrata Joshi
30 Apr 2019
5 min read
Privacy issues are now becoming a matter of concern, with Silicon Valley coming under the radar every now and then, and lawmakers taking a stand for the user’s privacy, it seems a lot of countries are now making an effort in this direction. In the US, lawmakers have already started working on the lawsuits and regulations that violate consumer data privacy. Countries like California have taken steps towards issues related to privacy and surveillance. Perhaps last week, the Oakland Privacy Advisory Commission released 2 key documents, an initiative to protect Oaklanders’ privacy namely, Proposed ban on facial recognition and City of Oakland Privacy Principles. https://twitter.com/cfarivar/status/1123081921498636288 Proposal to ban facial recognition tech The committee has written this document which talks about the regulations on Oakland’s acquisition and use of surveillance technology. It has defined Face Recognition Technology “as an automated or semi-automated process that assists in identifying or verifying an individual based on an individual's face.” According to this document, it will be unlawful for any city staff to retain, obtain, request, access, or use any Face Recognition Technology or any information obtained from Face Recognition Technology. City staff’s unintentional receipt, access to, or use of any information that has been obtained from Face Recognition Technology shouldn’t violate the above. Provided that the city staff shouldn’t request or solicit its receipt, access to, or use of such information. Unless the city staff logs such access to, receipt, or use in its Annual Surveillance Report. Oakland privacy principles laid out by the committee The Oakland Privacy Advisory Commission has listed few principles with regards to user’s data privacy for Oaklanders. Following are the privacy principles: Design and use equitable privacy practices According to the first principle, community safety and access to city services shouldn’t be welcomed at the cost of any Oaklander’s right to privacy. They aim to collect information in a way that won’t discriminate against any Oaklander or Oakland community. Whenever possible, the alternatives to the collection of personal data will be communicated at the time of data collection. Limit collection and retention of personal information According to this principle, personal information should be collected and stored only when and for as long as is justified for serving the purpose of collecting it in the first place. Information related to Oaklanders’ safety, health, or access to city services should be protected. Oaklanders views on collection of information will be considered by the Commission. Manage personal information with diligence Oaklanders’ personal information should be treated with respect and handled with care, regardless of how or by whom it was collected. For maintaining the security of the systems, the software and applications that interact with Oaklanders’ personal information are regularly updated and reviewed by the Commission. The personal information gathered from different departments will be combined when there is a need. According to the Oakland Privacy Advisory Commission, encryption, minimization, deletion, and anonymization can reduce misuse of personal information. The aim of the Commission is to create effective use of these tools and practices. 
Extend privacy protections to our relationships with third parties

According to the Commission, the responsibility to protect Oaklanders' privacy extends to vendors and partners. Oaklanders' personal information should be shared with third parties only to provide city services, and only when doing so is consistent with these privacy principles. The Commission will disclose the identity of the parties with whom it shares personal information, where the law permits it to do so.

Safeguard individual privacy in public records disclosures

According to the Commission, providing interested parties with relevant information about its services and governance is essential to democratic participation and civic engagement. The Commission will protect Oaklanders' individual privacy interests and the City's information security interests while preserving the fundamental objective of the California Public Records Act: encouraging transparency.

Be transparent and open

The Commission states that Oaklanders have the right to access and understand explanations of why and how the City collects, uses, manages, and shares their personal information. It aims to communicate these explanations to Oakland communities in plain and accessible language on the City of Oakland website.

Be accountable to Oaklanders

The Commission publicly reviews and discusses departmental requests to acquire and use technology that can serve surveillance purposes. It further encourages Oaklanders to share their views and concerns regarding any system or department that collects and uses their personal information, or has the potential to do so, and invites Oaklanders to share their views on the City's compliance with these principles.

While Oakland has clearly signalled that development at the cost of Oaklanders' privacy won't be acceptable, cities around the world still have a long way to go on user privacy laws.

Russia opens civil cases against Facebook and Twitter over local data laws

Microsoft says tech companies are "not comfortable" storing their data in Australia thanks to the new anti-encryption law

Harvard Law School launches its Caselaw Access Project API and bulk data service making almost 6.5 million cases available


Rigetti develops a new quantum algorithm to supercharge unsupervised Machine Learning

Sugandha Lahoti
19 Dec 2017
2 min read
Rigetti Computing, a startup based in Fremont, California, is on a mission to build the world's most powerful computer. It is a full-stack quantum computing company that uses quantum mechanics to solve machine learning and artificial intelligence problems. By hybridizing classical computing resources and traditional learning techniques with quantum processing, Rigetti has trained a 19-qubit gate-model processor to solve a clustering problem. Clustering is a machine learning technique used to organize data into similar groups and is a foundational challenge in unsupervised learning.

Rigetti's 19Q quantum computer, available through its cloud computing platform Forest, runs a quantum approximate optimization algorithm; the Forest platform is used to control the quantum computer and access the data it generates. The algorithm is combined with gradient-free Bayesian optimization to train the quantum machine. This hybridization relies on Bayesian optimization of the classical parameters within the quantum circuit, and it reaches an optimal solution in fewer steps than would be expected by drawing cluster assignments uniformly at random. The runtime for 55 Bayesian optimization steps, with N = 2500 measurements per step, is approximately 10 minutes, and only about 25% of runs failed to reach the optimum within those 55 steps. The algorithm can also be applied to other combinatorial optimization problems such as image recognition and machine scheduling.

Rigetti's demonstration uses more qubits than any previous algorithm run on a gate-based quantum processor, and the algorithm proved robust to realistic noise. The entire algorithm is implemented in Python, leveraging the pyQuil library for describing parameterized quantum circuits in the quantum instruction language Quil; the Bayesian optimizer is provided by the open-source package BayesianOptimization, also written in Python. A toy sketch of this kind of hybrid loop is given below.

This demonstration is just a basic example of how quantum computers can help solve machine learning problems. Hybrid approaches like this one form the basis of valuable applications for the first quantum computers; beating the best classical benchmarks, however, will require more qubits and better performance. Apart from developing new quantum algorithms for machine learning, Rigetti Computing builds hardware and software to store and process quantum information. You can learn more about their research on their blog.
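The following is a minimal sketch of the hybrid quantum-classical loop described above, not Rigetti's actual 19-qubit QAOA clustering experiment. It assumes pyQuil 2.x with the Forest quilc/QVM servers running locally, plus the bayes_opt package; the two-qubit circuit and the toy objective (maximizing how often the two qubits disagree, a MaxCut-style stand-in for clustering) are our own invention for illustration.

```python
# Toy hybrid loop: a parameterized quantum circuit evaluated on the QVM,
# with its angles tuned by gradient-free Bayesian optimization.
# Assumes pyQuil 2.x and a locally running QVM/quilc (Forest SDK).
import numpy as np
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT, RY
from bayes_opt import BayesianOptimization

qc = get_qc("2q-qvm")  # 2-qubit quantum virtual machine

def ansatz(theta0: float, theta1: float) -> Program:
    """Build a small parameterized circuit (the quantum half of the loop)."""
    p = Program()
    p += H(0)
    p += CNOT(0, 1)
    p += RY(theta0, 0)
    p += RY(theta1, 1)
    return p

def objective(theta0: float, theta1: float) -> float:
    """Classical cost estimated from measurements: the fraction of shots
    in which the two qubits disagree (a toy clustering-style objective)."""
    results = qc.run_and_measure(ansatz(theta0, theta1), trials=500)
    return float(np.mean(results[0] != results[1]))

# The gradient-free Bayesian optimizer (the classical half of the loop)
# searches over the circuit's rotation angles.
optimizer = BayesianOptimization(
    f=objective,
    pbounds={"theta0": (0.0, 2 * np.pi), "theta1": (0.0, 2 * np.pi)},
    random_state=42,
)
optimizer.maximize(init_points=5, n_iter=20)
print(optimizer.max)  # best angles found and their objective value
```

The design mirrors the article's description: the quantum processor only evaluates the parameterized circuit, while all parameter updates happen classically, so the approach works even without quantum gradients.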