
Tech News - Data


Shareholders sue Alphabet’s board members for protecting senior execs accused of sexual harassment

Natasha Mathur
11 Jan 2019
5 min read
Alphabet shareholder James Martin filed a lawsuit yesterday against Alphabet's board of directors, Larry Page, Sergey Brin, and Eric Schmidt, for covering up sexual harassment allegations against former top execs at Google and for paying them large severance packages. As mentioned in the lawsuit, Martin sued the company for breaching its fiduciary duty to shareholders, unjust enrichment, abuse of power, and corporate waste. "The individual defendants breached their duty of loyalty and good faith by allowing the defendants to cause, or by themselves causing, the company to cover up Google's executives' sexual harassment, and caused Google to incur substantial damage", reads the lawsuit.

The lawsuit, filed in San Mateo County court in California, seeks major changes to Google's corporate governance. It calls for non-management shareholders to nominate three new candidates for election to the board, and for elimination of the current dual-class structure of the stock, which in turn would take away the majority of the voting share from Page and Brin. It wants the former Google executives to repay the severance packages, benefits, and other compensation that they received from Google. Additionally, it seeks to have the Alphabet directors pay punitive damages to Alphabet for their engagement in corporate waste.

Apart from the lawsuit filed by Martin, Alphabet's board was hit with another lawsuit this week, filed on behalf of two additional pension funds, the Northern California Pipe Trades Pension Plan and the Teamsters Local 272 Labor Management Pension Fund, which own Alphabet stock. That lawsuit makes allegations similar to the one filed by Martin, accusing Alphabet's board members of 'breaching their fiduciary duties by rewarding male harassers' and 'hiding the Google+ breach from the public'.

The news of Google paying its top execs outsized exit packages first came to light back in October 2018, when the New York Times shared its investigation into sexual misconduct at Google. It alleged that Google had protected Andy Rubin, creator of Android, and Amit Singhal, ex-senior VP of Google search, among other senior execs, after they were accused of sexual misconduct. Google reportedly paid $90 million as an exit package to Rubin, along with a well-respected farewell. Similarly, Singhal was asked to resign in 2016 after accusations surfaced that he had groped a female employee at an offsite event in 2005. As per the NY Times report, Singhal received an exit package that paid him millions. Both Rubin and Singhal, however, denied the accusations.

In response to Google's handling of sexual misconduct, over 20,000 Google employees, along with vendors and contractors, organized the Google "walkout for real change" and walked out of their offices back in November 2018 to protest the discrimination, racism, and sexual harassment encountered within Google. The employees laid out five demands as part of the Google walkout, including an end to forced arbitration in cases of discrimination and sexual harassment for employees, among others. In response to the walkout, Google eliminated its forced arbitration policy in cases of sexual harassment, a step that was soon followed by Facebook, which also eliminated its forced arbitration policy. Sundar Pichai, CEO of Google, wrote a note in which he admitted that he's 'sincerely sorry' and hopes to bring more transparency around sexual misconduct allegations.
The 'Google walkout for real change' Medium page responded to the lawsuit today, stating that they agree with the shareholders and that "anyone who enables abuse, harassment and discrimination must be held accountable, and those with the most power have the most to account for". The response also states that currently a small group of "mostly white" male executives makes decisions at Google that significantly impact workers and the world with "little accountability". "We have all the evidence we need that Google's leadership does not have our best interests at heart. We need to change the way the system works, above and beyond addressing the wrongs of those who work within the system," reads the post.

The lawsuit filed by Martin partly relies on non-public evidence, namely records of Alphabet's board meetings in 2014 (concerning Rubin) and 2016 (concerning Singhal), which show the board members discussing severance packages for Rubin and Singhal. However, this part was heavily redacted from the public version at Google's demand. Both meetings, the full board meeting along with the leadership development and compensation committee meeting, are covered in the evidence showing approved payments to Rubin. The lawsuit states that Google directors agreed to pay Rubin because they wanted to 'ensure his silence', fearing that if they fired him for cause he would publicly reveal the details of sexual harassment and other wrongdoings within Google. Moreover, Google also asked the victims of sexual harassment to keep quiet even after it found the sexual assault allegations to be credible. "When Google covers up harassment and passes the trash, it contributes to an environment where people don't feel safe reporting misconduct. They suspect that nothing will happen or, worse, that the men will be paid and the woman will be pushed aside", quotes the lawsuit.

For more coverage, check out the full suit filed by Martin and the two pension funds.

Read next:
Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"
Richard DeVaul, Alphabet executive, resigns after being accused of sexual harassment


Amazon introduces PartiQL, a SQL-compatible unifying query language for multi-valued, nested, and schema-less data

Bhagyashree R
02 Aug 2019
3 min read
Yesterday, Amazon introduced a new SQL-compatible query language named PartiQL, a "unifying query language" that allows you to query data regardless of the database type and vendor. Amazon has open-sourced the language's lexer, parser, and compiler under the Apache 2.0 license. The open-source implementation also provides an interactive shell, or Read-Evaluate-Print Loop (REPL), with which you can quickly write and evaluate PartiQL queries.

Why PartiQL was introduced

Amazon's business requires querying and transforming huge amounts and types of data that are not limited to SQL tabular data but also include nested and semi-structured data. The tech giant wants to make its relational database services like Redshift capable of accessing non-relational data while maintaining backward compatibility with SQL. To address these requirements, Amazon created PartiQL, which enables you to query data across a variety of formats and services in a simple and consistent way. Amazon's announcement includes a diagram depicting the basic idea behind PartiQL (source: Amazon).

Many Amazon services already use PartiQL, including Amazon S3 Select, Amazon Glacier Select, Amazon Redshift Spectrum, Amazon Quantum Ledger Database (Amazon QLDB), and Amazon internal systems.

Advantages of using PartiQL

PartiQL is fully compatible with SQL: You will not have much trouble adopting PartiQL as it is fully compatible with SQL. All the existing queries that you are familiar with will work in SQL query processors that are extended to provide PartiQL.
Works with nested data: PartiQL treats nested data as a first-class citizen of the data abstraction. Its syntax and semantics enable users to "comprehensively and accurately access and query nested data."
Format and datastore independent: PartiQL allows you to write the same query for all data formats, as its syntax and semantics are not tied to a specific data format. To enable this behavior, the query operates on a logical type system that maps to diverse formats. Because of its expressiveness, you can use it with diverse underlying datastores.
Optional schema and query stability: You are not required to have a predefined schema over a dataset. PartiQL is built to work with engines that are schemaless or that assume the presence of a schema.
Requires minimal extensions: PartiQL requires a minimal number of extensions compared to SQL. These extensions for multi-valued, nested, and schema-less data combine seamlessly with the joining, filtering, aggregation, and windowing capabilities of standard SQL.

To know more, check out the official announcement by Amazon.

Read next:
#WeWontBuildIt: Amazon workers demand company to stop working with Palantir and take a stand against ICE
Ex-Amazon employee hacks Capital One's firewall to access its Amazon S3 database; 100m US and 60m Canadian users affected
Amazon Transcribe Streaming announces support for WebSockets
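To make the format-independence claim concrete, here is a minimal sketch in Python of the kind of nested, schema-less record PartiQL targets, together with a PartiQL-style query (shown only as a comment) and an equivalent hand-written traversal. The query text, field names, and data are illustrative assumptions, not taken from Amazon's announcement.

```python
# A nested, schema-less record of the kind PartiQL targets: orders with a
# variable-length list of items, where not every row needs the same fields.
orders = [
    {"id": 1, "customer": "ada", "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]},
    {"id": 2, "customer": "lin", "items": [{"sku": "A1", "qty": 5}]},
]

# Illustrative PartiQL-style query (an assumption, not copied from the spec):
#   SELECT o.customer, i.sku, i.qty
#   FROM orders AS o, o.items AS i
#   WHERE i.qty > 1
# The FROM clause unnests the inner collection, which is the key extension
# over standard SQL for nested data.

# Equivalent traversal written by hand, to show what the engine must do:
result = [
    {"customer": o["customer"], "sku": i["sku"], "qty": i["qty"]}
    for o in orders
    for i in o["items"]
    if i["qty"] > 1
]
print(result)
# [{'customer': 'ada', 'sku': 'A1', 'qty': 2}, {'customer': 'lin', 'sku': 'A1', 'qty': 5}]
```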


“The future is quantum” — Are you excited to write your first quantum computing code using Microsoft’s Q#?

Abhishek Jha
12 Dec 2017
8 min read
The quantum economy is coming

"I've seen things you people wouldn't believe" – many loved what the replicant Roy Batty (played by Rutger Hauer) had to say in his legendary death speech. But while the 1982 film Blade Runner showed us a world where everything that could go wrong had gone wrong, the monologue retains its air of disbelief in 2017, albeit from the other end. The promise of an astonishing future has a positive undertone this time! As if artificial intelligence, the Internet of Things, and self-driving cars were not enough, the big daddy of them all, quantum computing, is underway. It's not yet a fad, but the craze for such a marvel abounds. Every inch we move towards quantum computing (it's acceleration, stupid!), the future looks more stupefying. And now, with Microsoft releasing its own quantum programming language and a development kit, it's one hell of an opportunity to live in a time when quantum computing nears a possibility. Which is a different ball game: the moment you hear about quantum computing, you forget about linear algebra.

Video: https://www.youtube.com/watch?v=doNNClTTYwE

A giant leap forward, quantum computing is set to alter our economic, industrial, academic, and societal landscape forever. In just hours or days, a quantum computer can solve complex problems that would otherwise take billions of years for classical computing to solve. This has massive implications for research in healthcare, energy, environmental systems, smart materials, and more.

What is inside Microsoft's quantum development kit?

Microsoft had already announced its plans to release a new programming language for quantum computers at its Ignite conference this year. At the time, the company said the launch might come sometime by the end of 2017. That day has come, and Microsoft is previewing a free version of its Quantum Development Kit. The kit includes all of the pieces a developer needs to get started: the Q# programming language (programmers of yesteryear, like me, will pronounce it "Q Sharp") and compiler, a Q# library, a local quantum computing simulator, a quantum trace simulator, and a Visual Studio extension. So, basically, the preview is aimed at early adopters who want to understand what it takes to develop programs for quantum computers.

Introducing Q#

Microsoft describes Q# as "a domain-specific programming language used for expressing quantum algorithms. It is to be used for writing sub-programs that execute on an adjunct quantum processor, under the control of a classical host program and computer." If you remember, there was a statement from Satya Nadella at the Ignite announcement that while developers could use the proposed language on classical computers to try their hand at developing quantum apps, in the future they will be writing programs that actually run on topological quantum computers. Consider this the unique selling point of Q#! "The beauty of it is that this code won't need to change when we plug it into the quantum hardware," said Krysta Svore, who oversees the software aspects of Microsoft's quantum work. And in case you wish to learn how to program a quantum computer using the Q# language, you'd find yourself at home if you're acquainted with Microsoft Visual Studio, as Q# is "deeply integrated" with it. Besides, Q# has several elements of C#, Python, and F# ingrained, along with new features specific to quantum computing.
Quantum simulator

Part of Microsoft's development kit is a quantum simulator that will allow developers to figure out whether their algorithms are actually feasible and can run on a quantum computer. It lets programmers test their software on a traditional desktop computer or through Microsoft's Azure cloud-computing service. You can simulate a quantum computer of about 30 logical qubits on your laptop (so you don't have to rely on some remote server). To simulate more than 40 logical qubits, you can use an Azure-based simulator.

Remember, Microsoft is competing with the likes of Google and IBM to create real-life quantum computers that are more powerful than a handful of qubits. So a simulator that lets developers test programs and debug code on their own computers is necessary, since there really aren't any quantum computers for them to test their work on yet. Once Microsoft is able to create a general-purpose quantum computer, applications created via this kit will be supported on it. By offering the more powerful simulator – one with over 40 logical qubits of computing power – through its Azure cloud, Microsoft is hinting that it envisions a future where customers use Azure for both classical and quantum computing.

New tutorials and libraries

In addition to the Q# programming language and the simulator, the development kit includes a companion collection of documentation, libraries, and sample programs. A number of tutorials and libraries are supplied to help developers experiment with the new paradigm. This may help them get a better foothold on the complex science behind quantum computing, and develop familiarity with aspects of computing that are unique to quantum systems, such as quantum teleportation – a method of securely sharing information across quantum computing bits, or qubits, that are connected by a quantum state called entanglement. "The hope is that you play with something like teleportation and you get intrigued," Krysta said.

Microsoft is using a 'different design' for its topological quantum computer

Microsoft is still trying to build a working machine, but it is using a very different approach that it says will make its technology less error-prone and more suitable for commercial use. The tech pioneer is pursuing a novel design based on controlling an elusive particle called a Majorana fermion, a concept that was almost unheard of. Engineers have almost succeeded in controlling the Majorana fermion in a way that will enable them to perform calculations, said Todd Holmdahl, head of Microsoft's quantum computing efforts, adding that Microsoft will have a quantum computer on the market within five years.

These systems push the boundaries of how atoms and other tiny particles work. While traditional computers process bits of information as 1s or 0s, quantum machines rely on "qubits" that can be a 1 and a 0 at the same time. So two qubits can represent four numbers simultaneously, three qubits can represent eight numbers, and so on. This means quantum computers can perform calculations much faster than standard machines and tackle problems that are far more complex. Theoretically, a topological quantum computer is designed in a way that will create more stable qubits. This could produce a machine with an error rate 1,000 to 10,000 times better than the computers other companies are building, according to Holmdahl, who led the development of the Xbox and the company's HoloLens goggles.
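The 30-qubit laptop limit and the 40-plus-qubit Azure simulator make sense once you count memory: a dense state-vector simulation stores 2^n complex amplitudes. The back-of-the-envelope sketch below is our own arithmetic, assuming 16 bytes per amplitude, not a figure from Microsoft, but it shows why the split falls roughly where it does.

```python
# Rough memory needed for a dense state-vector simulation of n qubits,
# assuming one complex128 amplitude (16 bytes) per basis state.
AMPLITUDE_BYTES = 16

def state_vector_bytes(n_qubits: int) -> int:
    # 2**n basis states, one amplitude each
    return (2 ** n_qubits) * AMPLITUDE_BYTES

for n in (20, 30, 40):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB")

# 20 qubits -> 0 GiB       (about 16 MiB - trivial)
# 30 qubits -> 16 GiB      (fits on a well-equipped laptop)
# 40 qubits -> 16,384 GiB  (roughly 16 TiB, hence a cloud-hosted simulator)
```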
Researchers have only been able to keep qubits in a quantum state for fractions of a second. When qubits fall out of a quantum state, they produce errors in their calculations, which can negate any benefit of using a quantum computer. The lower error rate of Microsoft's design may mean it can be more useful for tackling real applications – even with a smaller number of qubits, perhaps fewer than 100. Interestingly, Krysta said that her team has already proven mathematically that algorithms that use a quantum approach can speed up machine learning applications substantially – enabling them to run as much as 4,000 times faster.

The future is quantum

Make no mistake: the race for quantum computing has already flared up, to the extent that rivals Google and IBM are competing to achieve what they call quantum supremacy. At the moment, IBM holds the pole position with its 50-qubit prototype (at least until Google reveals its cards). But with Microsoft coming up with its own unique architecture, it's difficult to underplay Redmond's big vision. During its Ignite announcement, the company stressed a "comprehensive full-stack solution" for controlling the quantum computer and writing applications for it. That means it is in no hurry. "We like to talk about co-development," Krysta had said. "We are developing those [the hardware and software stack] together so that you're really feeding back information between the software and the hardware as we learn, and this means that we can really develop a very optimized solution."

The technology is still in a long research phase, but the prospects are bright. Brought online, quantum computing could transform what seems unreal today into real-world use cases. Going from "a billion years on a classical computer to a couple hours on a quantum computer" has taken decades of research. And unlike in Blade Runner, all those moments will not be lost in time like tears in the rain.


Mozilla drops “meritocracy” from its revised governance statement and leadership structure to actively promote diversity and inclusion

Natasha Mathur
09 Oct 2018
6 min read
Last week, Mozilla spoke out about its use of the word "meritocracy" to describe its governance and leadership structures, and it has now decided to discontinue using the word following a revised governance proposal. "We have been thinking about the words we use as important carriers of our intended culture and the culture we wish to see in the broader movements we participate in. We have, therefore, taken up a review of our language and practices", writes Emma Irwin on the Mozilla diversity blog.

The first line of Mozilla's governance statement reads, "Mozilla is an open source project governed as a meritocracy." Meritocracy refers to a structure governed by people selected according to merit. On the surface, this sounds ideal, but given its abused usage, the term has become an insidious way of maintaining exclusivity for those with access and experience. Using the term "meritocracy" to refer to communities suffering from a "lack of diverse representation" and without "equal opportunity" is misleading and perpetuates a dominant monoculture.

Irwin states that about 20 years ago, "meritocracy" described the best practice among open source projects. Now, however, the concept has become linked to hidden bias and outright abuse. This bias involves devaluing a person's contributions based on aspects of their identity such as gender or race. Many advocates for women and minorities in the tech world have spoken out about how the tech industry considers men to be better at their jobs than everyone else, without taking into account that many men have had more opportunities to succeed than their minority and female counterparts. A general belief in the open source community is that those contributing the most have the most merit and are the most deserving. Those who contribute less are seen as not as "meritorious", ignoring the fact that they might have less access to opportunity, time, and money, which prevents them from contributing freely. This is the reality of open source "meritocracies", and one of the driving factors behind the less diverse culture of the open source community.

As a result, Patrick Finch, a former strategist at Mozilla, and Emma Irwin, Open Project & Communities Specialist at Mozilla, revised the proposal to better articulate the major principle behind Mozilla's governance statement. The revised proposal states:

"Mozilla is an open source project. Our community is structured as a virtual organization. Authority is primarily distributed to both volunteer and employed community members as they show their ability through contributions to the project. The project also seeks to debias this system of distributing authority through active interventions that engage and encourage participation from diverse communities."

Mitchell Baker, Mozilla's co-founder and chair, accepted the revised proposal. She said: "The original meaning I took for meritocracy in open source meant empowering individuals, rather than managers, or manager's managers or tenure-based authority. However, it's now clear that so-called meritocracies have included effective forms of discrimination. I personally long for a word that conveys a person's ability to demonstrate competence and expertise and commitment separate from the job title, or college degree, or management hierarchy, and to be evaluated fairly by one's peers".

Mozilla is not the only organization that has decided to step away from "meritocracy".
GitHub's CEO Chris Wanstrath literally removed the rug at GitHub's San Francisco headquarters back in 2014. The rug, kept as a centerpiece in the office, was emblazoned with the phrase "United Meritocracy of GitHub". Another example of organizations taking a step to promote diversity is Python dumping the offensive 'master' and 'slave' terms from its documentation. Also, in September, Linux dropped its meritocracy-based Code of Conflict in favor of a new Code of Conduct, just as Linus Torvalds took a break from Linux to work on his behavioral issues. This new Code of Conduct has been at the epicenter of a major culture clash within the Linux community of late.

"I long for a word that makes it clear that each individual who shares our mission is welcome, and valued, and will get a fair deal at Mozilla – that they will be recognized and celebrated for their contributions without regard to other factors. Sadly, "meritocracy" is not the word it once was. The challenge is not to retain a word that has become tainted. The challenge is to build teams and culture and systems that are truly inclusive", says Baker.

People's opinions about Mozilla's decision are varied.

"Well, meritocracy means we will try to select and give influence/recognition based on merit. When someone tries to push an idea like "let's have more Asians" or "let's have more men" in the project we can direct them to the very first statement where they see we do selection based on merit and not skin color, gender or sexual orientation. Any attempt to introduce gender/race/sexual orientation quotas will promptly be rejected on those basis. There is value in wording it like this beyond "let's do good", reads a comment by "bluecalm" on Hacker News.

"The real issue here is that some people get really upset by the word "meritocracy". Clearly, you have never dealt with a white man who is better off than certain women, or people from ethnic minorities, and thinks it's because his being white/male makes him inherently better and invokes "meritocracy" all the freaking time. I have. And I'm a half-Asian male - I can't imagine what it's like to deal with that jerk if I were a woman or an ethnic minority he didn't consider inherently intelligent. When the word "meritocracy" is used to structurally shut down debates of sexism and racism, it is no longer about meritocracy", reads a comment by "vanderZwan" on Hacker News.

"How is this relevant? If someone is voicing dissent about a culture there's a million barriers they can hide behind instead of facing the allegations ("we're committed to working towards a more diverse and inclusive environment..."). Are we to cower away from any word that can be twisted to justify bad things as well as good things, for fear that people will abuse them? The word "meritocracy" doesn't seem like a line of code specifying an objective action that is to be taken to solve a problem, it describes a vision of how someone would ideally like their company to operate. How about we just call out bad behavior when we see it and not let it pollute the vision? That's what needs to happen, or we'll be locked in this battle until one political group manages to suppress the others (that, by the way, seem to have the exact same end goals they do)", reads a comment by "nyxxie" on Hacker News.

For more information, check out the official Mozilla diversity blog.
Read next:
Mozilla, Internet Society, and web foundation wants G20 to address "techlash" fuelled by security and privacy concerns
Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls
Mozilla updates Firefox Focus for mobile with new features, revamped design, and Geckoview for Android
Mary Meeker, one of the premier Silicon Valley investors, quits Kleiner Perkins to start her own firm


After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

Vincy Davis
03 Sep 2019
3 min read
In October last year, MongoDB announced that it was switching to the Server Side Public License (SSPL). Since then, Red Hat dropped support for MongoDB in January from Red Hat Enterprise Linux and Fedora. Now Homebrew, a popular package manager for macOS, has removed MongoDB from the Homebrew core formulas because MongoDB has migrated to a non-open-source license. Yesterday, FX Coudert, a Homebrew member, announced the news on Twitter: https://twitter.com/fxcoudert/status/1168493202762096643

In a post on GitHub, Coudert clearly states that MongoDB's migration to a 'non open-source license' is the reason behind this resolution. Since SSPL is not OSI-approved, it cannot be included in homebrew-core. Also, mongodb and mongodb@3.6 do not build from source on any of the 3 macOS versions, so they have been removed along with mongodb 3, 3.2, and 3.4; Coudert adds that it would make little sense to keep older, unmaintained versions. He also added that percona-server-mongodb, which also comes under the SSPL, has been removed from the Homebrew core formulas. Upstream continues to maintain the custom Homebrew "official" tap for the latest versions of MongoDB.

Earlier, Homebrew project leader Mike McQuaid had commented on GitHub that MongoDB was their 45th most popular formula and should not be removed, as doing so would break things for many people. Coudert countered this by replying that since MongoDB is not open source anymore, it does not belong in Homebrew core. He added that since upstream is providing a tap with their official version, users can have the latest version (instead of Homebrew's old, unmaintained one). "We will have to remove it at some point, because it will bit rot and break. It's just a question of whether we do that now, or keep users with the old version for a bit longer," he specified.

MongoDB's past controversies due to SSPL

In January this year, MongoDB received its first major blow when Red Hat dropped MongoDB over concerns related to its SSPL. Tom Callaway, the university outreach team lead at Red Hat, had said that SSPL is "intentionally crafted to be aggressively discriminatory towards a specific class of users. To consider the SSPL to be "Free" or "Open Source" causes that shadow to be cast across all other licenses in the FOSS ecosystem, even though none of them carry that risk." Subsequently, in February, Red Hat Satellite also decided to drop MongoDB and support a PostgreSQL backend only. The Red Hat development team stated that PostgreSQL is a better solution in terms of the types of data and usage that Satellite requires. In March, following all these changes, MongoDB withdrew the SSPL from the Open Source Initiative's approval process. It was finally decided that SSPL will only require commercial users to open source their modified code, which means that any other user can still modify and use MongoDB code for free.

Check this space for new announcements and updates regarding Homebrew and MongoDB.

Other related news in Tech:
How to ace a data science interview
Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset
'I code in my dreams too', say developers in Jetbrains State of Developer Ecosystem 2019 Survey


Google moving towards data centers with 24/7 carbon-free energy

Amey Varangaonkar
12 Oct 2018
3 min read
It comes as no surprise to most that Google has been one of the largest buyers of renewable energy. In 2017 alone, Google purchased over 7 billion kilowatt-hours (kWh) from solar panels and wind farms designed especially for its electricity consumption. In light of the IPCC climate change report released just a couple of days back, Google has also released a paper discussing its efforts regarding its 24/7 carbon-free energy initiative.

What does the Google paper say?

In line with its promise of moving towards a future driven by carbon-free energy, Google's paper discusses the steps the company is taking to reduce its carbon footprint. Key aspects discussed in this paper, aptly titled 'Moving toward 24x7 Carbon-Free Energy at Google Data Centers: Progress and Insights', are:

Google's framework for using 24/7 carbon-free energy
How Google is currently utilizing carbon-free energy to power its data centers across campuses situated all over the world; Finland, North Carolina, the Netherlands, Iowa, and Taiwan are some of the places where this is being achieved
An analysis of current power usage and how the insights derived can be used in the journey ahead

Why Google is striving for a carbon-free strategy

Per Google, the company has been carbon-neutral since 2007 and has met its goal of matching all of its global energy consumption with renewable energy. Considering the scale of Google's business and the size of its existing infrastructure, it has always been a large consumer of electricity. Google's business expansion plans for the near future could, in turn, have a direct effect on its environmental footprint. As such, its strategy of 24/7 carbon-free energy makes complete sense. According to Google, "Pursuing this long-term objective is important for elevating carbon-free energy from being an important but limited element of the global electricity supply portfolio today, to a resource that fully powers our operations and ultimately the entire electric grid."

This is a positive and important step by Google towards building a carbon-free future with more dependence on renewable energy sources. It will also encourage other organizations of similar scale to adopt a similar approach to reduce carbon emissions. Microsoft, for example, has already pledged a 75% reduction of its carbon footprint by 2030, and Oracle has increased its solar power usage as part of its plan to reduce carbon emissions.

Read next:
Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy, yesterday
Google's new Privacy Chief officer proposes a new framework for Security Regulation
Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan

‘Tableau Day’ highlights: Augmented Analytics, Tableau Prep Builder and Conductor, and more!

Savia Lobo
10 May 2019
4 min read
The Tableau community held a Tableau Day in Mumbai, India, yesterday, where it announced some exciting upcoming developments in Tableau. Highlights of the event included an in-depth explanation of the new Tableau Prep Builder and Conductor, how Tableau plans to move towards augmented analytics, and more.

The conference also included a customer story from Nishtha Sharma, Manager at Times Internet, who shared how Tableau helped Times Internet optimize sales and revenue, manage cost per customer, and make business predictions with the help of Tableau dashboards. She added that Times Internet started out solving around 10 business problems with 7 dashboards; thanks to that initial success with Tableau, it is now solving close to 30 business cases with 15 dashboards. Let us have a look at some of the highlights.

Augmented analytics: the next step for Tableau

Varun Tandon, a Tableau solution consultant, explained how Tableau is adopting intelligent, or augmented, analytics. Tableau may be moving towards adopting augmented analytics for its platform, where ML and AI can be used to enhance data access and data quality, uncover previously hidden insights, suggest analyses, deliver predictive analytics, suggest actions, and handle many other tasks. Several attendees raised questions and speculation about Tableau's acquisition of Empirical Systems last year and whether Ask Data, Tableau's new natural language capability included in Tableau 2018.2, was a result of it. The representatives confirmed the acquisition and also mentioned that Tableau is planning on building analytics and augmented analytics within Tableau without the need for additional third-party add-ons. However, they did not clarify whether Ask Data was a result of the Empirical Systems acquisition. With Empirical's NLP module, Tableau users may easily gain insights, make better data-driven decisions, and explore many more features without knowledge of data science or query languages. Doug Henschen, a technology analyst at ConstellationR, explored in his report, "Tableau Advances the Era of Smart Analytics", the smart features that Tableau Software has introduced and is investing in, and how these capabilities will benefit Tableau customers.

Creating a single hub for data from various sources

The conference explained in detail, with examples, how Tableau can be used as a single hub for data coming from various sources such as NetSuite, Excel, Salesforce, and so on.

New features in Tableau Prep Builder and Conductor

Tableau's new Prep Builder and Conductor, which save massive amounts of data preparation time, were also demonstrated and their new features explained in detail in a session conducted by Shilpa Bhatia, a customer consultant at Tableau Software. Attendees asked whether Tableau Prep Builder and Conductor would replace ETL. The representatives said that Prep does a good job with data preparation, but users should not confuse it with ETL; they have called Tableau Prep Builder and Conductor a "mini ETL". Tableau is also shipping monthly updates, since the tool is still evolving, and will continue to do so for the near future. A question about the ability to pull data from Prep into a Jupyter notebook for building data frames was also asked; however, this isn't possible with Prep Builder and Conductor either.
The representatives said Prep is extremely simple to use; however, it is a resource-heavy tool, and a dedicated machine with more than 16 GB of RAM will be needed to avoid system lag on large datasets.

The self-service mode in Tableau

Jayen Thakker, a sales consultant at Tableau, explained how one can go beyond dashboards with Tableau. He said that with the help of Tableau's self-service mode, users can explore and build dashboards on their own without waiting for a developer to build them.

Upcoming Tableau versions

The conference also revealed that Tableau 2019.2 is currently in Beta 2 and is expected to be released next month, with a Beta 3 version before the final release. Each release of Tableau includes around 100 to 150 changes, and a couple of upcoming changes were discussed, including spatial data functions (MakePoint and MakeLine) and next steps on how Tableau will move beyond Ask Data to include advanced analytics and AI features. Tableau is also working on serving people who need more traditional reporting, the representatives mentioned.

To know more about the Tableau Day highlights from Mumbai, watch this space or visit Tableau's official website.

Read next:
Alteryx vs. Tableau: Choosing the right data analytics tool for your business
Tableau 2019.1 beta announced at Tableau Conference 2018
How to do data storytelling well with Tableau [Video]


Survey reveals how artificial intelligence is impacting developers across the tech landscape

Richard Gall
13 Sep 2018
2 min read
The hype around artificial intelligence has reached fever pitch. It has captured the imagination - and stoked the fears - of the wider public, reaching beyond computer science departments and research institutions. But when artificial intelligence dominates the international conversation, it's easy to forget that it's not simply a thing that exists and develops itself. However intelligent machines are, and however adept they are at 'learning', it's essential to remember they are things that are engineered - things that are built by developers. That's the thinking behind this year's AI Now survey: to capture the experiences and perspectives of developers and to better understand the impact of artificial intelligence on their work and lives.

Key findings from Packt's artificial intelligence survey

Launched in August, and receiving 2,869 responses from developers working in every area, from cloud to cybersecurity, the survey had some interesting findings. These include:

69% of developers aren't currently using AI-enabling tools in their day-to-day role, but 75% of respondents said they were planning on learning AI-enabling software in the next 12 months.
TensorFlow is the tool defining AI development - 27% of respondents listed it as the top tool on their to-learn list.
75% of developers believe automation will have either a positive or significantly positive impact on their career.
47% of respondents believe AGI will be a reality within the next 30 years.
The biggest challenges for developers in terms of AI are having the time to learn new skills and knowing which frameworks and tools to learn.
Internal data literacy is the biggest challenge for AI implementation.

As well as quantitative results, the survey also produced qualitative insights from developers, providing some useful and unique perspectives on artificial intelligence. One developer, talking about bias in AI, said: "As a CompSci/IT professional I understand this is a more subtle manifestation of 'Garbage In/Garbage Out'. As an African American, I have significant concerns about say, well documented bias in say criminal sentencing being legitimized because 'the algorithm said so'."

To read the report, click here. To coincide with the release of the survey results, Packt is also running a $10 sale on all eBooks and videos across its website throughout September. Visit the Packt eCommerce store to start exploring.


Anaconda 5.3.0 released, takes advantage of Python's speed and feature improvements

Melisha Dsouza
03 Oct 2018
2 min read
The Anaconda team announced the release of Anaconda Distribution 5.3.0 in a blog post yesterday. By harnessing the speed of Python, learning and performing data science and machine learning become all the easier with this new update. Here is the list of new features in Anaconda 5.3.0.

#1 Utilising Python's speed
Anaconda Distribution 5.3 is compiled with Python 3.7, in addition to the Python 2.7 Anaconda installers and Python 3.6 Anaconda metapackages. This ensures the new update takes full advantage of Python's speed and feature improvements.

#2 Better CPU performance
Users deploying TensorFlow can make use of the Intel Math Kernel Library 2019 for Deep Neural Networks (MKL 2019) included in this upgrade. These Python binary packages ensure high CPU performance.

#3 Better reliability
The team has improved reliability by capturing and storing package metadata for installed packages. The additional metadata is used by the package cache to efficiently manage the environment while storing the patched metadata used by the conda solver.

#4 New packages added
Over 230 packages have been updated or added by the team.

#5 Work in progress on the casting bug
The team is working on the casting bug in NumPy with Python 3.7, and the patch is in progress until NumPy is updated.

To know more about this release, you can head over to the full release notes for the distribution.

Read next:
Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API
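As one concrete illustration of the "feature improvements" that come with a Python 3.7-based distribution, the sketch below uses dataclasses, a standard-library module added in Python 3.7. The example itself is ours and is not taken from the Anaconda release notes.

```python
# dataclasses is new in the Python 3.7 standard library, so it is available
# out of the box in a Python 3.7-based Anaconda environment.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Experiment:
    # Fields get a generated __init__, __repr__, and __eq__ for free.
    name: str
    learning_rate: float = 0.01
    tags: List[str] = field(default_factory=list)

run = Experiment("baseline", tags=["mkl", "cpu"])
print(run)  # Experiment(name='baseline', learning_rate=0.01, tags=['mkl', 'cpu'])
```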


PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn

Melisha Dsouza
04 Sep 2018
3 min read
HyperLearn is a statsmodels-style package built on PyTorch, NoGil Numba, NumPy, Pandas, SciPy & LAPACK, with similarities to Scikit-Learn. The project was started last month by Daniel Hanchen and still has some unstable packages. He aims to make Linear Regression, Ridge, PCA, and LDA/QDA faster, which then flows onto other algorithms being faster. The combination incorporates novel algorithms to make it 50% faster and let it use 50% less RAM, along with a leaner GPU Sklearn. HyperLearn also has embedded statistical inference measures and can be called with Scikit-Learn-like syntax (model.confidence_interval_).

HyperLearn's speed/memory comparison

There is a 50%+ improvement on Quadratic Discriminant Analysis (with similar improvements for other models), as shown in a comparison chart on GitHub, where Time(s) is Fit + Predict and RAM(mb) = max( RAM(Fit), RAM(Predict) ).

Key methodologies and aims of the HyperLearn project

#1 Parallel for loops
HyperLearn for loops will include memory sharing and memory management. CUDA parallelism will be made possible through PyTorch & Numba.

#2 50%+ faster and leaner
Matrix operations that have been improved include matrix multiplication ordering, element-wise matrix multiplication (reducing complexity to O(n^2) from O(n^3)), reducing matrix operations to Einstein notation, and evaluating one-time matrix operations in succession to reduce RAM overhead. Applying QR decomposition and then SVD (singular value decomposition) might be faster in some cases. The structure of the matrix is also utilised to compute inverses faster, and computing SVD(X) and then getting pinv(X) is sometimes faster than pure pinv(X).

#3 Statsmodels is sometimes slow
Confidence and prediction intervals, hypothesis tests & goodness-of-fit tests for linear models are optimized, using Einstein notation & Hadamard products where possible, computing only what is necessary (the diagonal of a matrix only), and fixing the flaws of Statsmodels around notation, speed, memory issues and storage of variables.

#4 Deep learning drop-in modules with PyTorch
Using PyTorch to create Scikit-Learn-like drop-in replacements.

#5 20%+ less code along with cleaner, clearer code
Using decorators & functions wherever possible, intuitive middle-level function names (isTensor, isIterable), and handling parallelism easily through hyperlearn.multiprocessing.

#6 Accessing old and exciting new algorithms
Matrix completion algorithms - non-negative least squares, NNMF; Batch Similarity Latent Dirichlet Allocation (BS-LDA); correlation regression; and many more!

Daniel went on to publish some preliminary algorithm timing results on a range of algorithms from MKL SciPy, PyTorch, MKL NumPy, HyperLearn's methods, and Numba JIT-compiled algorithms. Here are his key findings on the HyperLearn statsmodels package:

HyperLearn's pseudoinverse has no speed improvement.
HyperLearn's PCA will have over a 200% improvement in speed.
HyperLearn's linear solvers will be over 1 times faster, i.e. they will show a 100% improvement in speed.

You can find all the details of the test on reddit.com. For more insights on HyperLearn, check out the release notes on GitHub.

Read next:
A new geometric deep learning extension library for PyTorch releases!
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
Introduction to Sklearn
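One of the ideas above - replacing a full O(n^3) matrix product with an Einstein-notation expression that computes only the diagonal, i.e. "computing only what is necessary" - can be illustrated with plain NumPy. This is our own sketch of the general trick, not HyperLearn's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Naive route: form the full n x n product (O(n^3) work, O(n^2) memory)
# just to read off its diagonal.
diag_naive = np.diag(A @ B)

# Einstein-notation shortcut: compute only the diagonal entries directly,
# sum_k A[i, k] * B[k, i], which is O(n^2) work and O(n) memory.
diag_fast = np.einsum("ik,ki->i", A, B)

print(np.allclose(diag_naive, diag_fast))  # True
```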

Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members

Amrata Joshi
05 Apr 2019
3 min read
Last week, Google announced the formation of the Advanced Technology External Advisory Council (ATEAC) to help the company with major issues in AI such as facial recognition and machine learning fairness. Only a week later, Google has decided to dissolve the council, according to reports by Vox.

In a statement to Vox, a Google spokesperson said that "the company has decided to dissolve the panel, called the Advanced Technology External Advisory Council (ATEAC), entirely." The company further added, "It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."

This news comes immediately after a group of Google employees criticized the selection of the council and urged the company to remove Kay Coles James, the Heritage Foundation president, for her anti-trans and anti-immigrant views. James's presence on the council had made some of the other members uncomfortable too. When Joanna Bryson was asked by a user on Twitter whether she was comfortable serving on a board with James, she answered, "Believe it or not, I know worse about one of the other people."
https://twitter.com/j2bryson/status/1110632891896221696
https://twitter.com/j2bryson/status/1110628450635780097

A few researchers and civil society activists had also voiced their opposition to the council's anti-trans and anti-LGBTQ leanings. Alessandro Acquisti, a behavioural economist and privacy researcher, declined an invitation to join the council.
https://twitter.com/ssnstudy/status/1112099054551515138

Googlers also insisted on removing Dyan Gibbens, the CEO of drone technology company Trumbull Unmanned, from the board. She has previously worked on drones for the US military. Last year, Google employees were agitated that the company had been working with the US military on drone technology as part of the so-called Project Maven. A number of employees decided to resign over it, and Google later promised not to renew Maven. On the ethics front, Google had also offered resources to the US Department of Defense for a "pilot project" to analyze drone footage with the help of artificial intelligence. The question that arises here is: are Googlers and Google's shareholders comfortable with the idea of their software being used by the US military? President Donald Trump's meeting with Google CEO Sundar Pichai adds more to it.
https://twitter.com/realDonaldTrump/status/1110989594521026561

Though this move by Google seems to be a mark of victory for the more than 2,300 Googlers and supporters who signed the petition and took a stand against transphobia, it is still going to be a tough time for Google to redefine its AI ethics. The company might have saved itself from this turmoil if it had selected the council members more wisely.
https://twitter.com/EthicalGooglers/status/1113942165888094215

To know more about this news, check out the blog post by Vox.

Read next:
Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council
Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council?
Amazon joins NSF in funding research exploring fairness in AI amidst public outcry over big tech #ethicswashing


Tableau 2018.1 brings new features to help organizations easily scale analytics

Sunith Shetty
24 Apr 2018
3 min read
Tableau Software has brought out unique packages that combine new and existing analytical capabilities to scale data analytics across organizations. With Tableau 2018.1, you can enable an effective data-driven enterprise by providing easy access to data for the entire workforce.

Tableau is one of the leading business intelligence tools used to derive quality insights. With the remarkable growth of data that customers are experiencing, the demand to analyze and interact with data is the need of the hour. This is where Tableau's range of products helps people visually interact with data to make critical decisions. Some of the noteworthy offerings available in Tableau 2018.1 are:

Tableau Creator
Provides full analytical capabilities to data analysts, BI professionals, and other power users, who can now take advantage of Tableau's suite of products to uncover data insights in a fast and effective way. You can combine a range of products offered by Tableau for powerful data analytics on web and desktop. The products included in the suite are Tableau Desktop (at no additional cost), Tableau Prep (a data preparation tool to help customers ready their data for analysis), and a license for Tableau Server (to publish and share reports and dashboards).

Tableau Explorer
Perform governed self-service data analytics to analyze data quickly and with ease. Collaborate with others based on governed data sources, create new dashboards, and get timely updates with new subscriptions and alerting.

Tableau Viewer
Enables you to extend the value of data across organizations in a cost-effective manner and make better data-driven decisions by interacting with dashboards and reports created by others. You will be able to view and filter dashboards, subscriptions, and data-driven alerts on mobile and web.

In addition to the above products, Tableau has also released Tableau Prep - a new data preparation application - an improved version of Tableau Desktop, and new web authoring capabilities. These new tailored offerings allow people to leverage the power of data analytics in a way that is flexible, easy to wrap up, and simple to scale. You can now migrate your existing Tableau Server and Desktop installations to the new features offered in the Tableau 2018.1 release. Once the migration is done, administrators will be able to assign a specific option -- Creator, Explorer, or Viewer -- to each user in their organization, completing the transition process.

Read next:
Top 5 free Business Intelligence tools
What Tableau Data Handling Engine has to offer
Hands on Table Calculation Techniques with Tableau


IBM CEO, Ginni Rometty, on bringing HR evolution with AI and its predictive attrition AI

Natasha Mathur
05 Apr 2019
4 min read
On Wednesday, CNBC held its At Work Talent & HR: Building the Workforce of the Future Conference in New York. Ginni Rometty, IBM CEO (also appointed to Trump’s American Workforce Policy Board) had a discussion around several strategies and announcements regarding job change due to AI and IBM’s predictive attrition AI. Rometty shared details about an AI that IBM HR department has filed a patent for, as first reported by CNBC. The AI is developed with Watson (IBM’S Q&A AI), for “predictive attrition Program”, which can predict at 95% of accuracy about which employees are about to quit. It will also prescribe remedies to managers for better employee engagement.  The AI retention tool is part of IBM products designed to transform the traditional approach to HR management. Rometty also mentioned that since IBM has implemented AI more widely, it has been able to reduce the size of its global human resources department by 30 percent. Rometty states that AI will be effective at tasks where HR departments and corporate managers are not very effective. It will keep employees on a clear career path and will also help identify their skills. Rometty mentions that many companies fail to be 100% transparent with the employees regarding their individual career path and growth which is a major issue. But, IBM AI will be able to better understand the data patterns and adjacent skills, which in turn, will help with identifying an individual’s strength. "We found manager surveys were not accurate. Managers are subjective in ratings. We can infer and be more accurate from data”, said Rometty. IBM also eradicated the annual performance review method. "We need to bring AI everywhere and get rid of the [existing] self-service system," Rometty said. This is because AI will now help the IBM employees better understand which programs they need to get growth in their career. Also, poor performance is no longer a problem as IBM is using "pop-up" solution centers that help managers seek the expected and better performance from their employees. "I expect AI to change 100 percent of jobs within the next five to 10 years," said Rometty. The need for “skill revolution” has already been an ongoing topic of discussion in different organizations and institutions across the globe, as AI keeps advancing. For instance, the Bank of England’s chief economist, Andy Haldane, gave a warning, last year, that the UK needs to skill up overall and across different sectors (tech, health, finance, et al) as up to 15 million jobs in Britain are at stake. This is because artificial intelligence is replacing a number of jobs which were earlier a preserve of humans. But, Rometty has a remedy to prevent this “technological unemployment” in the future. She says, “to get ready for this paradigm shift companies have to focus on three things: retraining, hiring workers that don't necessarily have a four-year college degree and rethinking how their pool of recruits may fit new job roles”. IBM also plans to invest $1 billion in training workers for “new collar” jobs, for workers with tech-skills will be hired without a four-year college degree. These "new collar" jobs could include working at a call center, app development, or a cyber-analyst at IBM via P-TECH (Pathways in Technology Early College High School) program. P-TECH is a six-year-long course that starts with high school and involves an associate's degree. 
Other measures by IBM include the CTA Apprenticeship Coalition program, aimed at creating thousands of new apprenticeships across 20 US states. These apprenticeships come with frameworks for more than 15 roles in fields including software engineering, data science and analytics, cybersecurity, creative design, and program management.

As far as employers are concerned, Rometty advises them to "bring consumerism into the HR model. Get rid of self-service, and using AI and data analytics personalize ways to retrain, promote and engage employees. Also, move away from centers of excellence to solution centers".

For more information, check out the official conversation with Ginni Rometty at the CNBC @Work Summit.

IBM sued by former employees on violating age discrimination laws in workplace
Diversity in Faces: IBM Research's new dataset to help build facial recognition systems that are fair
IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
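IBM has not published the internals of its patented attrition AI, so the following is only a rough, hypothetical sketch of what a predictor of this general kind could look like: a tiny binary classifier over made-up HR features, written with TensorFlow.js. All feature values, labels, and hyperparameters here are invented for illustration and have nothing to do with IBM's actual system.

```typescript
// Purely illustrative sketch -- NOT IBM's patented system (whose internals are
// not public). It shows the general shape of an attrition predictor: a small
// binary classifier over hypothetical HR features (e.g. tenure, salary ratio,
// engagement score) predicting whether an employee is likely to leave.
import * as tf from '@tensorflow/tfjs';

async function trainAttritionModel() {
  // Made-up training data: 4 numeric HR features per employee,
  // label 1 = left the company, 0 = stayed.
  const features = tf.tensor2d([
    [0.2, 0.9, 0.1, 0.8],
    [0.7, 0.4, 0.6, 0.3],
    [0.1, 0.8, 0.2, 0.9],
    [0.9, 0.2, 0.8, 0.1],
  ]);
  const labels = tf.tensor2d([[1], [0], [1], [0]]);

  // A tiny feed-forward network with a sigmoid output for attrition probability.
  const model = tf.sequential();
  model.add(tf.layers.dense({units: 8, inputShape: [4], activation: 'relu'}));
  model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}));
  model.compile({optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy']});

  await model.fit(features, labels, {epochs: 50});

  // Predicted probability that a new employee with these features will leave.
  (model.predict(tf.tensor2d([[0.3, 0.7, 0.2, 0.6]])) as tf.Tensor).print();
}

trainAttritionModel();
```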
TensorFlow.js 0.11.1 releases!

Sunith Shetty
21 May 2018
3 min read
The TensorFlow team has released a new version of TensorFlow.js - a browser-based JavaScript library for training and deploying machine learning models. The new version, 0.11.1, brings notable features that ease WebGL-accelerated, browser-based machine learning.

TensorFlow.js is an open source JavaScript library that allows you to build machine learning models in the browser. It provides flexible and intuitive high-level APIs to build, train and run models from scratch, and it lets you run and retrain pre-existing TensorFlow and Keras models right in the browser.

Some of the noteworthy changes available in TensorFlow.js 0.11:

You can now save and load tf.Model instances using various media:
- Browser IndexedDB
- Browser local storage
- HTTP requests
- Browser file downloads and uploads

To learn more about each medium used to save and load models in TensorFlow.js, refer to the tutorials page. There is also a set of new features added to both the TensorFlow.js Core API and the TensorFlow.js Layers API (a short illustrative sketch exercising a few of these APIs appears at the end of this article).

TensorFlow.js Core API (0.8.3 ==> 0.11.0)

The TensorFlow.js Core API provides low-level, hardware-accelerated linear algebra operations, along with an eager API for automatic differentiation.

Breaking changes:
- ES5 tf-core.js bundle users now have to use the symbol tf instead of tfc
- GPGPUContext is now exported, and getCanvas() was added to the WebGLBackend

Performance and development changes:
- conv2dDerInput on CPU has been optimized to be 100x faster
- Quantized weight loading support was added to reduce model size and improve model download time
- A new serialization infrastructure was added to the Core API
- New helper methods and basic types were added to support model exporting

New features added to the Core API:
- tf.losses.logLoss, which lets you add a log loss term to the training procedure
- tf.losses.cosineDistance, which lets you add a cosine-distance loss to the training procedure
- tensor.round(), which rounds the values of a tensor to the nearest integer, element-wise
- tf.cumsum, which computes the cumulative sum of a tensor along a given axis
- tf.losses.hingeLoss, which lets you add a hinge loss to the training procedure

For the complete list of new features, documentation changes, the plethora of bug fixes and other miscellaneous changes added to the Core API, refer to the release notes.

TensorFlow.js Layers API (0.5.2 ==> 0.6.1)

The TensorFlow.js Layers API is a high-level machine learning model API built on TensorFlow.js Core. It can be used to build, train and execute deep learning models in the browser.

Breaking changes:
- ES5 tf-layers.js bundle users now have to use the symbol tf instead of tfl
- Exporting of the backend symbols has been removed
- The default epochs in Model.fit() has been changed to 1

Feature changes:
- A version string is now added to the keras_version field of JSONs produced by model serialization
- tf.layers.cropping2D, a cropping layer for 2D input (e.g. images), has been added

For the complete list of documentation changes, bug fixes and other miscellaneous changes added to the Layers API, refer to the release notes.

Emoji Scavenger Hunt showcases TensorFlow.js
You can now make music with AI thanks to Magenta.js
The 5 biggest announcements from TensorFlow Developer Summit 2018
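As a quick, unofficial illustration of the release notes above, the sketch below exercises a few of the 0.11-era APIs: saving and loading a Layers model via browser local storage, plus some of the newly added Core API ops. It assumes a browser environment (for the localstorage:// target) and uses the API names as documented at the time; later versions renamed some of them (for example, tf.loadModel became tf.loadLayersModel), so treat this as a sketch rather than a definitive snippet.

```typescript
// Minimal, unofficial sketch of a few TensorFlow.js 0.11-era APIs.
// Run in a browser context; localstorage:// saving is not available in Node.
import * as tf from '@tensorflow/tfjs';

async function demo() {
  // Build and compile a tiny Layers API model.
  const model = tf.sequential();
  model.add(tf.layers.dense({units: 4, inputShape: [8], activation: 'relu'}));
  model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}));
  model.compile({optimizer: 'sgd', loss: 'binaryCrossentropy'});

  // Save to browser local storage, then load it back (0.11-era name: tf.loadModel).
  await model.save('localstorage://demo-model');
  const restored = await tf.loadModel('localstorage://demo-model');
  (restored.predict(tf.randomNormal([1, 8])) as tf.Tensor).print();

  // A few of the newly added Core API ops.
  const x = tf.tensor1d([1.4, 2.6, 3.5]);
  x.round().print();      // element-wise rounding to the nearest integer
  tf.cumsum(x).print();   // cumulative sum along axis 0

  const labels = tf.tensor2d([[1, 0], [0, 1]]);
  const preds = tf.tensor2d([[0.9, 0.1], [0.2, 0.8]]);
  tf.losses.cosineDistance(labels, preds, 1).print();  // cosine-distance loss
  tf.losses.logLoss(labels, preds).print();            // log loss
  tf.losses.hingeLoss(labels, preds).print();          // hinge loss
}

demo();
```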

Transformer-XL: A Google architecture with 80% longer dependency than RNNs

Natasha Mathur
05 Feb 2019
3 min read
A group of researchers from Google AI and Carnegie Mellon University announced the details of their newly proposed architecture, called Transformer-XL (extra long), yesterday. It is aimed at improving natural language understanding beyond a fixed-length context with self-attention; a fixed-length context here means a long text sequence truncated into fixed-length segments of a few hundred characters. The researchers also used two methods to quantitatively study the effective context lengths of Transformer-XL and the baselines. The architecture itself relies on two key techniques: a segment-level recurrence mechanism and a relative positional encoding scheme. Let's have a look at these key techniques in detail.

Segment-level recurrence

The recurrence mechanism helps address the limitations of using a fixed-length context. During training, the hidden state sequences computed for the previous segment are fixed and cached, and then reused as an extended context once the model starts processing the next segment (a minimal sketch of this caching idea appears at the end of this article).

This connection increases the largest possible dependency length by N times (N being the depth of the network), since contextual information can now flow across segment boundaries. The recurrence mechanism also resolves the context fragmentation issue. Moreover, with the recurrence mechanism applied to every two consecutive segments of a corpus, a segment-level recurrence is created in the hidden states, which lets the model utilize context reaching beyond the two segments. Besides enabling extra-long context and resolving the fragmentation issue, the recurrence mechanism also makes evaluation significantly faster.

Relative Positional Encodings

Although the segment-level recurrence technique is effective, reusing the hidden states poses a technical challenge: keeping the positional information coherent. Simply reusing the original absolute positional encodings with segment-level recurrence does not work, because the positions are no longer coherent once previous segments are reused. This is where the relative positional encoding scheme comes into the picture to make the recurrence mechanism possible. The core idea is to encode only the relative positional information in the hidden states, using fixed embeddings with learnable transformations instead of learnable embeddings. "Our formulation uses fixed embeddings with learnable transformations instead of learnable embeddings and thus is more generalizable to longer sequences at test time", state the researchers. With both approaches combined, Transformer-XL has a much longer effective context and is able to process the elements in a new segment without any recomputation.

Results

Transformer-XL obtains new results on a variety of major language modeling (LM) benchmarks. It is the first self-attention model that achieves better results than RNNs on both character-level and word-level language modeling, and it can model longer-term dependency than both RNNs and the vanilla Transformer. Transformer-XL has the following three benefits:

Transformer-XL's dependency is about 80% longer than RNNs and 450% longer than vanilla Transformers.
Transformer-XL is up to 1,800+ times faster than a vanilla Transformer during evaluation on language modeling tasks, as no recomputation is needed.
Transformer-XL has better performance in perplexity on long sequences thanks to its long-term dependency modeling, and also on short sequences, by resolving the context fragmentation problem.

For more information, check out the official Transformer-XL research paper.

Researchers build a deep neural network that can detect and classify arrhythmias with cardiologist-level accuracy
Researchers introduce a machine learning model where the learning cannot be proved
Researchers design 'AnonPrint' for safer QR-code mobile payment: ACSC 2018 Conference
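To make the segment-level recurrence idea more concrete, here is a minimal, illustrative sketch (not the authors' code) of the caching bookkeeping: hidden states from the previous segment are kept around and prepended to the current segment's states so attention can reach across the segment boundary. The function and variable names (processSegment, memory, segmentHidden) are invented for illustration, and the attention computation itself is omitted.

```typescript
// Illustrative sketch of segment-level recurrence (not the authors' code).
// `memory` holds the hidden states of the previous segment; when the next
// segment arrives, those cached states are prepended as extra context.
import * as tf from '@tensorflow/tfjs';

let memory: tf.Tensor2D | null = null;  // [prevSegmentLen, hiddenDim]

function processSegment(segmentHidden: tf.Tensor2D): tf.Tensor2D {
  // Extended context: cached states from the previous segment followed by
  // the current segment's states, so attention can span the boundary.
  const extended = memory === null
    ? segmentHidden
    : (tf.concat([memory, segmentHidden], 0) as tf.Tensor2D);

  // ... self-attention (with relative positional encodings) over `extended`
  // would happen here ...

  // Cache the current segment's states for the next call. In the paper the
  // cached states are treated as constants, so no gradients flow into them.
  // (Real code would also dispose old tensors, e.g. with tf.tidy.)
  memory = segmentHidden.clone();
  return extended;
}

// Two consecutive segments of 4 timesteps with hidden size 8.
const seg1 = tf.randomNormal([4, 8]) as tf.Tensor2D;
const seg2 = tf.randomNormal([4, 8]) as tf.Tensor2D;
console.log(processSegment(seg1).shape);  // [4, 8]: no cached context yet
console.log(processSegment(seg2).shape);  // [8, 8]: 4 cached + 4 current rows
```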