
Tech News - Data

1209 Articles

IBM, Oracle under the scanner again for questionable hiring and firing policies

Melisha Dsouza
21 Jan 2019
5 min read
The Guardian has reported that Oracle is under the scanner for pay discrimination between male and female employees. On the very same day, The Register reported that an affidavit has been filed against IBM for hiding from the US Department of Labor the ages of employees being laid off.

Pay discrimination at Oracle

"Women are getting paid less across the board. These are some of the strongest statistics I've ever seen – amazingly powerful numbers." - Jim Finberg, attorney for the plaintiffs

On 18th January, a motion was filed against Oracle in California alleging that the company's female employees were paid, on average, $13,000 less per year than men doing similar work, The Guardian reports. More than 4,200 women will be represented in the motion after an analysis of payroll data found that women made 3.8% less in base salaries on average, 13.2% less in bonuses, and 33.1% less in stock value than male employees. The analysis also found that the pay disparities persist even for women and men with the same tenure and performance review scores in the same job categories.

The complaint outlines several instances in which Oracle's female plaintiffs noticed the discrepancies by chance. One of the plaintiffs saw a pay stub from a male employee that drew her attention to the wage gap between them, especially since she was the male employee's trainer.

This is not the first time Oracle has been involved in a case like this. The Guardian reports that in 2017, the US Department of Labor (DoL) filed a suit against Oracle alleging that the firm had a "systemic practice" of paying white male workers more than their counterparts in the same job titles, resulting in pay discrimination against women and against Black and Asian employees. Oracle dismissed these allegations as "without merit", stating that its pay decisions were "non-discriminatory and made based on legitimate business factors including experience and merit".

Jim Finberg, the attorney for this suit, said that none of the named plaintiffs work at Oracle any more; some left out of frustration over discriminatory pay. The suit also alleges that the disparities arose because Oracle used the prior salaries of new hires to determine their compensation at the company. It further claims that Oracle was aware of its discriminatory pay and "had failed to close the gap even after the US government alleged specific problems."

The IBM layoffs

Along similar lines, a former senior executive at IBM alleges, in an affidavit filed on Thursday in the Southern District of New York, that her superiors directed her to hide from the US Department of Labor information about older staff being laid off by the company. Catherine Rodgers, formerly IBM's vice president in its Global Engagement Office and senior state executive for Nevada, was terminated after nearly four decades with IBM. The Register reports that Rodgers believes she was fired for raising concerns that IBM was engaged in systematic age discrimination against employees over the age of 40. IBM has faced controversy over laying off older workers before, notably after a March 2018 ProPublica report that highlighted the practice.

Rodgers had access to the list of people to be laid off in her group, and she noticed several unsettling patterns:

1. All of the employees to be laid off from her group were over the age of 50.
2. In April 2017, two employees over age 50 who had been included in the layoff filed a request for financial assistance from the Department of Labor under the Trade Assistance Act. The DoL sent Rodgers a form asking her to list all of the employees in her group who had been laid off in the last three years, along with their ages. The list was reviewed with IBM HR, and Rodgers alleges she was "directed to delete all but one name before I submitted the form to the Department of Labor".
3. IBM began insisting that older staff come into the office daily.
4. Older workers were more likely to face relocation to new locations across the US.

Rodgers says that after she began raising questions she received her first-ever negative performance review, despite meeting all her targets for the year, and her workload increased without a pay rise. The plaintiffs' memorandum accompanying the affidavit asks the court to authorize notifying former IBM employees around the US who are over 40 and lost their jobs since 2017 that they can join the legal proceedings against the company.

Should these allegations prove true, it is troubling to see some of the biggest names in tech displaying such poor leadership morals. The outcome of these lawsuits will significantly influence the decisions other companies take on employee welfare in the coming years.

IBM launches industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever
Pwn2Own Vancouver 2019: Targets include Tesla Model 3, Oracle, Google, Apple, Microsoft, and more!


How Microsoft 365 is now using artificial intelligence to smartly enrich your content in OneDrive and SharePoint

Melisha Dsouza
29 Aug 2018
4 min read
Microsoft is now using the power of artificial intelligence in OneDrive and SharePoint. With this initiative, users can be more productive, make more informed decisions, keep content more secure, and benefit from better search. The increasing pressure on employees to be more productive in less time is challenging, especially given the ever-increasing volume of digital content. Microsoft aims to ease some of this pressure by providing smart solutions for storing content.

Smart features provided by Microsoft 365

Be productive

#1 Video and audio transcription

Beginning later this year, Microsoft will introduce automated transcription services natively available for video and audio files in OneDrive and SharePoint, using the same AI technology available in Microsoft Stream. A full transcript will be shown directly in the viewer alongside a video or while listening to an audio file, improving accessibility and search and helping users collaborate with others to improve productivity and quality of work. Once made, a video can be uploaded and published to Microsoft Stream, where AI comes into the picture with in-video face detection and automatic captions.

#2 Searching audio, video, and images

As announced last September, Microsoft has unlocked the value of photos and images stored in OneDrive and SharePoint. Searching images becomes a cakewalk as native, secure AI determines where photos were taken, recognizes objects, and extracts text in photos. Video and audio files also become fully searchable owing to the transcription services mentioned earlier.

#3 Intelligent file recommendations

Microsoft plans to introduce, later in 2018, a new files view on OneDrive and the Office.com home page that recommends relevant files to a user. The intelligence of Microsoft Graph will assess how a user works, who the user works with, and activity on content shared with the user across Microsoft 365, and this information will be used to suggest files while the user collaborates on content in OneDrive and SharePoint. The Tap feature in Word 2016 and Outlook 2016 intelligently recommends content stored in OneDrive and SharePoint by assessing the context of what the user is working on.

Making informed decisions has never been easier

The AI used in OneDrive and SharePoint helps users make informed decisions while working with content. Smart features like File Insights, intelligent sharing, and data insights provide stats and facts to make life easier. Suppose you have an important meeting at hand: File Insights gives viewers an 'inside look', i.e. the important information, at a glance, needed to prep for the meeting. Intelligent sharing helps employees share relevant content, like documents and presentations, with meeting attendees. Finally, data insights will use information provided by cognitive services to set up custom workflows that organize images, trigger notifications, or invoke more extensive business processes directly in OneDrive and SharePoint, with deep integration with Microsoft Flow.

Security enhancements

AI-powered OneDrive and SharePoint will help secure content and ward off malicious attacks. OneDrive Files Restore, integrated with Windows Defender Antivirus, protects users from ransomware attacks by identifying breaches and guiding them through remediation and file recovery. Users will be able to leverage the text extracted from photos and audio/video transcriptions by applying native data loss prevention (DLP) policies that automatically protect content, thereby aiding intelligent compliance.

Many Fortune 500 customers have already embraced Microsoft's bold vision for content collaboration and are moving their content to OneDrive and SharePoint. Take a look at the official page for detailed information on Microsoft 365's smart new features.

Defending Democracy Program: How Microsoft is taking steps to curb increasing cybersecurity threats to democracy
Microsoft claims it halted Russian spearphishing cyberattacks
Microsoft's .NET Core 2.1 now powers Bing.com


cstar: Spotify’s Cassandra orchestration tool is now open source!

Melisha Dsouza
07 Sep 2018
4 min read
On the 4th of September 2018, Spotify Labs announced that cstar, its Cassandra orchestration tool for the command line, would be made freely available to the public.

With Cassandra, achieving the right balance of performance, security, and data consistency is complicated. You need to run a specific set of shell commands on every node of a cluster, usually in some coordination, to avoid taking the cluster down. This task is easy for small clusters but gets tricky and time-consuming for big ones. Imagine having to run those commands on all Cassandra nodes in the company! A scheduled upgrade of the entire Cassandra fleet at Spotify involved a precise procedure with numerous steps. Since Spotify has clusters with hundreds of nodes, upgrading one node at a time is unrealistic; upgrading all nodes at once wasn't a viable option either, since that would take down the whole cluster.

In addition to these performance concerns, other complications of operating Cassandra include:

- Temporary network failures, breaking SSH connections, among others.
- Performance and availability can suffer if operations that are computation-heavy, or that involve restarting the Cassandra process or node, are not executed in a particular order.
- Nodes can go down at any time, so the status of the cluster should be checked not just before running the task, but also before execution starts on each new node. This rules out naive parallelization.

Spotify needed an efficient and robust way to run such operations on thousands of computers in a coordinated manner.

Why were Ansible and Fabric not considered by Spotify?

Ansible and Fabric are not topology-aware. They can be made to run commands in parallel on groups of machines, and with some wrapper scripts and elbow grease you can split a Cassandra cluster into multiple groups and execute a script on all machines in one group in parallel. On the downside, this solution neither waits for Cassandra nodes to come back up before proceeding nor notices if random Cassandra nodes go down during execution.

Enter cstar

cstar is based on paramiko, a Python (2.7, 3.4+) implementation of the SSHv2 protocol, and shares the same ssh/scp implementation that Fabric uses. It is a command-line tool that runs an arbitrary script on all hosts in a Cassandra cluster in a "topology-aware" fashion. Spotify Labs illustrates this with cstar running on a 9-node cluster with a replication factor of 3, assuming the script brings down the Cassandra process: there are always 2 available replicas for each token range.

cstar supports the following execution mechanisms:

- The script is run on exactly one node per data center at a time.
- The script is run topology-aware: with N data centers of M nodes each and a replication factor of X, this effectively runs the script on M/X × N nodes at a time.
- The script is run on all nodes at the same time, regardless of the topology.

Installing cstar and running a command on a cluster is easy and can be done by following the quick example on Spotify Labs.

The concept of 'jobs'

The execution of a script on one or more clusters is a job. Job control in cstar works like in Unix shells: a user can pause running jobs and resume them at a later point in time. It is also possible to configure cstar to pause a job after a certain number of nodes have completed. This lets a user run a cstar job on one node, manually validate that the job worked as expected, and then resume the job.

The features of cstar have made it much easier for Spotify to work with Cassandra clusters. You can find more insights in the original post on Spotify Labs.
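The topology-aware arithmetic above can be sketched in a few lines of Python. This is an illustrative model of the constraint, not cstar's actual implementation:

```python
def max_parallel_nodes(num_dcs: int, nodes_per_dc: int, replication_factor: int) -> int:
    """Upper bound on how many nodes a topology-aware run can touch at once:
    within each data center of M nodes, at most M // X nodes can run
    concurrently without two of them sharing a token range's replica set."""
    return (nodes_per_dc // replication_factor) * num_dcs

# The 9-node, RF-3 cluster from the Spotify Labs example (a single DC):
# at most 3 nodes down at once, leaving 2 live replicas per token range.
print(max_parallel_nodes(num_dcs=1, nodes_per_dc=9, replication_factor=3))
```

With two data centers of 12 nodes each and RF 3, the same bound gives 8 nodes at a time, which is why topology-aware runs finish far faster than strictly serial ones on large fleets.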
Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more
Baidu releases EZDL – a platform that lets you build AI and machine learning models without any coding knowledge
PrimeTek releases PrimeReact 2.0.0 Beta 3 version


Oracle releases GraphPipe: An open source tool that standardizes machine learning model deployment

Bhagyashree R
16 Aug 2018
3 min read
Oracle has released GraphPipe, an open source tool to simplify and standardize the deployment of machine learning (ML) models. Developing ML models is difficult, but deploying a model for customers to use is equally hard. Model development sees constant improvement, yet people often don't think about deployment. This is where GraphPipe comes into the picture!

What are the key challenges GraphPipe aims to solve?

- No standard way to serve APIs: The lack of a standard for model-serving APIs limits you to whatever the framework gives you. A business application will generally need its own generated client code just to talk to your deployed model, and deployment becomes even harder with multiple frameworks, since you have to write custom code to create ensembles of models from multiple frameworks.
- Building a model server is complicated: Out-of-the-box solutions for deployment are few because deployment gets less attention than training.
- Existing solutions are not efficient enough: Many current solutions don't focus on performance, so for certain use cases they fall short.

How does GraphPipe solve these problems?

GraphPipe uses flatbuffers as the message format for a predict request. Flatbuffers are like Google protocol buffers, with the added benefit of avoiding a memory copy during the deserialization step. A request message, as defined by the flatbuffer definition, includes:

- Input tensors
- Input names
- Output names

The request message is accepted by the GraphPipe remote model, which returns one tensor per requested output name, along with metadata about the types and shapes of the inputs and outputs it supports. (See GraphPipe's User Guide for diagrams of the deployment situation before and after GraphPipe.)

What are the features it comes with?

- Provides a minimalist machine learning transport specification based on flatbuffers, an efficient cross-platform serialization library for C++, C#, C, Go, Java, JavaScript, Lobster, Lua, TypeScript, PHP, and Python.
- Comes with simple implementations of clients and servers that make deploying and querying machine learning models from any framework considerably easier. Its efficient servers can serve models built in TensorFlow, PyTorch, mxnet, CNTK, or Caffe2.
- Provides efficient client implementations in Go, Python, and Java.
- Includes guidelines for serving models consistently according to the flatbuffer definitions.

You can find plenty of documentation and examples at https://oracle.github.io/graphpipe. The GraphPipe flatbuffer spec can be found on Oracle's GitHub, along with servers that implement the spec for Python and Go.

Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
Why Oracle is losing the Database Race
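Stepping back to the request/response model described above: a predict request carries input tensors, input names, and output names, and the server answers with one tensor per requested output name. The following toy sketch mirrors that shape in plain Python; it is not GraphPipe's actual flatbuffer API, and the tensor and model names are made up for illustration:

```python
from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class PredictRequest:
    # Mirrors the three fields the GraphPipe request message carries
    input_tensors: List[Sequence[float]]
    input_names: List[str]
    output_names: List[str]


def serve(request: PredictRequest, model) -> list:
    """A GraphPipe-style server returns one tensor per requested output name."""
    named_inputs = dict(zip(request.input_names, request.input_tensors))
    outputs = model(named_inputs)
    return [outputs[name] for name in request.output_names]


# Toy "model" that doubles its single input (hypothetical names "x" and "y")
def toy_model(inputs):
    return {"y": [2 * v for v in inputs["x"]]}


req = PredictRequest(input_tensors=[[1.0, 2.0]], input_names=["x"], output_names=["y"])
print(serve(req, toy_model))  # [[2.0, 4.0]]
```

In the real protocol the request and response travel as flatbuffers over HTTP, which is what lets clients in Go, Python, and Java all talk to the same server.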


FAE (Fast Adaptation Engine): iOlite's tool to write Smart Contracts using machine translation

Savia Lobo
12 Mar 2018
2 min read
iOlite Labs has developed a Google Translate-like engine known as the FAE (Fast Adaptation Engine). The engine can quickly adapt to any known language as its input, and it outputs results in the user's desired programming language. At present, the iOlite Labs team is focusing on the huge need for smart contract development via the programming language Solidity on the Ethereum blockchain. iOlite is thus set to dissolve existing technical learning boundaries:

- Programmers can write smart contracts using their existing skills in programming languages such as Python, C, JavaScript, and so on.
- Non-programmers can write smart contracts using natural languages such as English.

Although the engine is free to use, it encourages collaboration between intermediate programmers and expert developers, who benefit in two ways: first, by auditing the writing process of an author's smart contract, and second, by developing and optimizing features. Developers receive small fees in the form of iLT tokens each time they audit a smart contract or when features they have developed are used. This covers two of the three actors in the ecosystem: regular users (either authors or customers) and contributors (developers/auditors).

Currently, iOlite is focused on smart contracts, entering via the intelligence market, but there are numerous possible applications, including insurance underwriting, law, financial services, business, automation, and so on. As a collective macro-system, iOlite is a knowledge generator; it inherently fosters the best features to win through market forces, making it an ideal model for finding truth. As this journey advances, iOlite aims to provide solutions for many more language-system problems, such as formal ones in mathematics, and perhaps even to bridge the gap between natural and formal definitions in fields like neuropsychology.

Read more on this tool, along with real-world examples, in iOlite's whitepaper.


Blackberry is acquiring AI & cybersecurity startup, Cylance, to expand its next-gen endpoint solutions like its autonomous cars’ software

Savia Lobo
19 Nov 2018
2 min read
On Friday, BlackBerry announced plans to acquire Cylance for $1.4 billion in cash to help expand its QNX unit, which makes software for next-generation autonomous cars. According to BlackBerry, "Cylance will operate as a separate business unit within BlackBerry Limited". The deal is expected to close by February 2019.

Describing the acquisition, BlackBerry CEO John Chen said, "Cylance's leadership in artificial intelligence and cybersecurity will immediately complement our entire portfolio, UEM, and QNX in particular. We are very excited to onboard their team and leverage our newly combined expertise. We believe adding Cylance's capabilities to our trusted advantages in privacy, secure mobility, and embedded systems will make BlackBerry Spark indispensable to realizing the Enterprise of Things."

Technology from Cylance will be leveraged in critical areas of BlackBerry's Spark Platform, a next-generation secure chip-to-edge communications platform for the EoT (Enterprise of Things) that creates and leverages trusted connections between endpoints and enables organizations to comply with stringent multi-national regulatory requirements.

Cylance's CEO Stuart McClure said, "Our highly skilled cybersecurity workforce and market leadership in next-generation endpoint solutions will be a perfect fit within BlackBerry where our customers, teams, and technologies will gain immediate benefits from BlackBerry's global reach. We are eager to leverage BlackBerry's mobility and security strengths to adapt our advanced AI technology to deliver a single platform."

To know more about this acquisition, head over to the official press release.

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever

What if AIs could collaborate using human-like values? DeepMind researchers propose a Hanabi platform.

Prasad Ramesh
06 Feb 2019
3 min read
In a paper titled The Hanabi Challenge: A New Frontier for AI Research, 15 researchers discuss artificial intelligence systems playing a game called Hanabi. They propose an experimental framework, the Hanabi Learning Environment, for the AI research community to test and advance algorithms; it can help in assessing the performance of current state-of-the-art algorithms and techniques.

What's special about Hanabi?

Hanabi is a two- to five-player game involving cards with numbers on them. You play alongside the other participants, but you must trust the imperfect information they provide and make deductions to advance your cards. The game has an imperfect-information nature, as players cannot see their own cards; it is a test of collaboratively sharing information with discretion. The rules are specified in detail in the research paper.

Games have always been used to showcase or test the abilities of artificial intelligence and machine learning, be it Go, Chess, Dota 2, or others. So why would Hanabi be 'a new frontier for AI research'? The difference is that Hanabi needs a bit of human touch to play. Factors like trust, imperfect information, and cooperation come into the picture with this game, which is why it is a good testing ground for AI applications.

What's the paper about?

The idea is to test the collaboration of AI agents when information is limited and only implicit communication is allowed. The researchers say that Hanabi makes reasoning about the beliefs and intentions of other AI agents prominent. They believe that developing techniques that instill agents with such a theory will, in addition to succeeding at Hanabi, unlock ways for agents to collaborate with human partners. The researchers have also introduced the open-source Hanabi Learning Environment, an experimental framework that other researchers can use to assess their techniques.
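The environment's interface is described as similar to OpenAI Gym. As a rough sketch of what a Gym-style agent loop over such an environment looks like, here is a made-up stub; the class, observation fields, and reward below are hypothetical and are not the real Hanabi Learning Environment API:

```python
import random


class StubHanabiEnv:
    """Toy stand-in with a Gym-like reset/step interface (hypothetical;
    not the actual Hanabi Learning Environment classes)."""

    def __init__(self, num_players=2, max_steps=10):
        self.num_players = num_players
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        self.steps = 0
        # A real observation would include the other players' visible cards
        # and the hints given so far -- but never your own hand.
        return {"current_player": 0, "legal_moves": ["play", "discard", "hint"]}

    def step(self, action):
        self.steps += 1
        obs = {"current_player": self.steps % self.num_players,
               "legal_moves": ["play", "discard", "hint"]}
        reward = 1 if action == "play" else 0  # toy reward, not Hanabi scoring
        done = self.steps >= self.max_steps
        return obs, reward, done, {}


# Gym-style episode loop with a trivial random agent
env = StubHanabiEnv(num_players=2)
obs, done, score = env.reset(), False, 0
while not done:
    action = random.choice(obs["legal_moves"])
    obs, reward, done, _ = env.step(action)
    score += reward
print("episode finished, toy score:", score)
```

The point of the sketch is the shape of the loop: an agent only ever sees the observation the environment hands it, which is what forces the belief-and-intention reasoning the paper is interested in.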
Playing Hanabi requires a theory of mind, which revolves around human-like traits such as beliefs, intents, and desires. This theory-of-mind reasoning matters not just in how humans approach this game, but in how humans handle communication and interaction whenever multiple parties are involved.

Results and further work

The paper evaluates state-of-the-art reinforcement learning algorithms using deep learning. In self-play, they fall short of hand-coded Hanabi-playing bots; in collaborative play, they do not collaborate at all. This shows there is plenty of room for advances in this area related to theory of mind. The code for the Hanabi Learning Environment is written in Python and C++ and will be available on DeepMind's GitHub; its interface is similar to OpenAI Gym. For more details about the game and how the theory will help in testing AI agent interactions, check out the research paper.

Curious Minded Machine: Honda teams up with MIT and other universities to create an AI that wants to learn
Technical and hidden debts in machine learning – Google engineers give their perspective
The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence


Jeff Weiner talks about technology implications on society, unconscious bias, and skill gaps: Wired 25

Sugandha Lahoti
15 Oct 2018
4 min read
Last Saturday, Wired interviewed Jeff Weiner, CEO of LinkedIn, as part of its 25th-anniversary celebration. He talked about the implications of technology for modern society, saying that technology amplifies tribalism. He also discussed how LinkedIn keeps tabs on unconscious bias and why Americans need to develop soft skills to succeed in the coming years.

Technology accentuates tribalism

Asked about the implications of technology for society, Weiner said, "I think increasingly, we need to proactively ask ourselves far more difficult, challenging questions—provocative questions—about the potential unintended consequences of these technologies. And to the best of our ability, try to understand the implications for society." The concern is well founded: every week brings a top story about some company going wrong in some direction, whether the shutting down of Google+ or Facebook's security breach compromising 50M accounts.

He further talked about technology dramatically accelerating and reinforcing tribalism at a time when we increasingly need to come together as a society. One of the most important challenges for tech in the next 25 years, he says, is to "understand the impact of technology as proactively as possible. And trying to create as much value, and trying to bring people together to the best of our ability."

Unconscious bias on LinkedIn

He also talked about unconscious bias as an unintended consequence of LinkedIn's algorithms and initiatives: "It shouldn't happen that LinkedIn reinforces the growing socioeconomic chasms on a global basis, especially here in the United States, by providing more and more opportunity for those that went to the right schools, worked at the right companies, and already have the right networks."

Read more: 20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017

He elaborated on how LinkedIn is addressing this unconscious bias. LinkedIn's Career Advice Hub was developed last year, with the goal of creating economic opportunity for every member of the global workforce, as a response to the unconscious bias that crept into its 'Ask for a Referral' program. The Career Advice Hub enables any member of LinkedIn to ask for help, and any member to volunteer to help and mentor them. LinkedIn also plans to create economic opportunities for frontline workers, middle-skilled workers, and blue-collar workers, with another focus on knowledge workers "who don't necessarily have the right networks or the right degrees."

Soft skills: the biggest skill gap in the US

Jeff also said that the biggest skills gap in the United States is not coding skills but soft skills: written communication, oral communication, team building, people leadership, and collaboration. "For jobs like sales, sales development, business development, customer service, this is the biggest gap, and it's counter-intuitive."

Read more: 96% of developers believe developing soft skills is important

Read more: Soft skills every data scientist should teach their child

Soft skills are necessary because AI is still far from being able to replicate and replace human interaction and human touch. "So there's an incentive for people to develop these skills because those jobs are going to be more stable for a longer period of time." Before you start thinking about becoming an AI scientist, you need to know how to send email, how to work a spreadsheet, and how to do word processing. Jeff says, "Believe it or not, there are broad swaths of the population and the workforce that don't have those skills. And it turns out if you don't have these foundational skills if you're in a position where you need to re-skill for a more advanced technology, it becomes almost prohibitively complex to learn multiple skills at the same time."

Read the full interview on Wired.

The ethical dilemmas developers working on Artificial Intelligence products must consider
Consumer protection organizations submit a new data protection framework to the Senate Commerce Committee
Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill "that sets a floor, not a ceiling"


Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference

Prasad Ramesh
10 Dec 2018
3 min read
Researchers from the Rochester Institute of Technology have published a paper describing a method that maintains speed and accuracy when dealing with high-dimensional data sets.

What is the paper about?

The paper, titled Sparse Covariance Modeling in High Dimensions with Gaussian Processes, studies the statistical relationships among components of high-dimensional observations. The researchers propose modeling the changing covariances of observation elements as sparse multivariate stochastic processes. In particular, their novel covariance modeling method reduces dimensionality by relating the observation vectors to a lower-dimensional subspace. The changing correlations are characterized by jointly modeling the latent factors and factor loadings as collections of basis functions that vary with the covariates as Gaussian processes. Basis sparsity is encoded by automatic relevance determination (ARD) through the coefficients, to account for inherent redundancy. Experiments across various domains show performance superior to the best current methods.

What modeling methods are used?

In many AI applications there are complex relationships among different components of high-dimensional data sets, and these relationships can change with non-random covariates, say, an experimental condition. Two examples listed in the paper, which were also used in the experiments to test the method, are as follows:

- In computational gene regulatory network (GRN) inference, the topological structures of GRNs are context-dependent: the interactions of gene activities differ under different conditions such as temperature, pH, etc.
- In a data set of crime occurrences, correlations appear between spatially disjoint regions, and the spatial correlations change over time.

In such cases, typical modeling methods combine heterogeneous data taken from different experimental conditions, or sometimes work within a single data set.
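One common way to write such a covariate-dependent sparse factor model is sketched below. This is a reading consistent with the description above, not the paper's exact notation: Λ(x) is the loading matrix, φ_k the basis functions, and α_k the ARD precisions are labels chosen here for illustration.

```latex
% Observations y live in high dimension; the covariance changes with covariate x
y \mid x \sim \mathcal{N}\!\big(0,\ \Sigma(x)\big), \qquad
\Sigma(x) \;=\; \Lambda(x)\,\Lambda(x)^{\top} \;+\; \psi I,
% Each factor loading is a weighted sum of basis functions of x;
% ARD precisions \alpha_k prune irrelevant basis terms (basis sparsity)
\Lambda_{ij}(x) \;=\; \sum_{k} w_{ijk}\,\varphi_k(x), \qquad
w_{ijk} \sim \mathcal{N}\big(0,\ \alpha_k^{-1}\big).
```

The low-rank term Λ(x)Λ(x)ᵀ is what relates the observations to a lower-dimensional subspace, while the ARD priors on the coefficients let the model switch off redundant basis functions.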
The researchers propose a novel covariance modeling method that allows cov(y|x) = Σ(x) to change flexibly with x. One of the authors, Rui Li, stated: “This research is motivated by the increasing prevalence of high-dimensional data sets and the computational capacity to analyze and model their volatility and co-volatility varying over some covariates. The study proposed a methodology to scale to high dimensional observations by reducing the dimensions while preserving the latent information; it allows sharing information in the latent basis across covariates.” The method produced better results than other approaches across different experiments: it is robust to the choice of hyperparameters and produces a lower root mean square error (RMSE). The paper was presented at NeurIPS 2018; you can read it here.

How NeurIPS 2018 is taking on its diversity and inclusion challenges
Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?


Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge

Fatema Patrawala
20 Jun 2019
5 min read
Yesterday, The Verge published a gut-wrenching investigative report about the terrible working conditions of Facebook moderators at one of its contract vendor sites in North America, in Tampa. Facebook’s content moderation site in Tampa, Florida, is operated by the professional services firm Cognizant. It is one of the lowest-performing sites in North America and has never consistently enforced Facebook’s policies with the 98 percent accuracy stipulated in Cognizant’s contract. In February, The Verge published a similar report on the deplorable working conditions of content moderators at Facebook’s Arizona site. Both reports were investigated and written by acclaimed tech reporter Casey Newton. Yesterday’s article is based on interviews with 12 current and former moderators and managers at the Tampa site. In most cases, pseudonyms are used to protect employees from potential retaliation by Facebook and Cognizant, but for the first time, three former Facebook moderators agreed to break their nondisclosure agreements and discuss working conditions at the site on the record.
https://twitter.com/CaseyNewton/status/1141317045881069569
The moderators work in conditions full of filth and stress, to the point that one employee's death has been linked to the emotional trauma moderators endure every day. Keith Utley was a lieutenant commander in the military who, after retiring, chose to work as a Facebook moderator at the Tampa site.
https://twitter.com/CaseyNewton/status/1141316396942602240
Keith worked the overnight shift, moderating the worst material users post to Facebook daily: hate speech, murders, child pornography. Utley had a heart attack at his desk and died last year. Senior management initially discouraged employees from discussing the incident and tried to hide the fact that Keith had died, for fear it would hurt productivity.
When Keith’s father visited the site to collect his belongings, he broke down and said, “My son died here.” The moderators also say the Tampa site has only one bathroom for all 800 employees working there, and that the bathroom has repeatedly been found smeared with feces and menstrual blood. Office coordinators did not bother keeping the site clean, and it was infested with bed bugs; workers also found fingernails and pubic hair on their desks. “Bed bugs can be found virtually every place people tend to gather, including the workplace,” Cognizant said in a statement. “No associate at this facility has formally asked the company to treat an infestation in their home. If someone did make such a request, management would work with them to find a solution.” There have been instances of sexual harassment at the workplace as well: workers have filed two such cases since April, which are now before the US Equal Employment Opportunity Commission. Physical and verbal fights in the office are frequent, and thefts from the office premises were common. One former moderator bluntly told The Verge that if anything needs to change, it is one thing only: Facebook needs to shut down.
https://twitter.com/jephjacques/status/1141330025897168897
Many significant voices have joined the call to break up Facebook, among them Elizabeth Warren, a 2020 US presidential candidate who wants to break up big tech, and Chris Hughes, a Facebook co-founder who published an op-ed on why he thinks it's time to break up Facebook. In response to the investigation, Facebook spokesperson Chris Harrison said the company will conduct an audit of its partner sites and make other changes to promote the well-being of its contractors. He said the company would consider making more moderators full-time employees in the future and hopes to provide counseling for moderators after they leave.
The news garnered public anger and rage toward Facebook; people commented that Facebook defecates on humanity and profits enormously while easily getting away with it.
https://twitter.com/pemullen/status/1141357359861645318
Another comment reads that Facebook’s mission of connecting the world has been an abject failure, and that the world is worse off for being connected in the ways Facebook has connected it. Others noted that the story is a reminder of how little these big tech firms care about people.
https://twitter.com/stautistic/status/1141424512736485376
Siva Vaidhyanathan, author of the book Antisocial Media and a columnist at the Guardian, applauded Casey Newton, The Verge reporter, for bringing up this story. But he also noted that Newton ignored the work of Sarah T. Roberts, who has written an entire book on this topic, Behind the Screen.
https://twitter.com/sivavaid/status/1141330295376863234
Check out the full story covered by The Verge on their official blog post.

Facebook releases Pythia, a deep learning framework for vision and language multimodal research
After refusing to sign the Christchurch Call to fight online extremism, Trump admin launches tool to defend “free speech” on social media platforms
How Genius used embedded hidden Morse code in lyrics to catch plagiarism in Google search results

GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation

Savia Lobo
08 Nov 2018
3 min read
Yesterday, GitHub announced that it now supports the GPL Cooperation Commitment, along with 40 other software companies, because the commitment aligns with GitHub’s core values. According to the GitHub post, by supporting this change GitHub “hopes that this commitment will improve fairness and certainty for users of key projects that the developer ecosystem relies on, including Git and the Linux kernel. More broadly, the GPL Cooperation Commitment provides an example of evolving software regulation to better align with social goals, which is urgently needed as developers and policymakers grapple with the opportunities and risks of the software revolution.”

Effective regulation has an enforcement mechanism that encourages compliance. The most severe penalties for non-compliance, such as shutting down a line of business, are reserved for repeat and intentional violators; less serious or accidental non-compliance may result only in warnings, after which the violation should be promptly corrected.

GPL as a private software regulation

The GNU General Public License (GPL) is a tool for a private regulator (the copyright holder) to achieve a social goal: under the license, anyone who receives a covered program has the freedom to run, modify, and share that program. However, GPL version 2 has a bug from the perspective of an effective regulator: non-compliance results in termination of the license, with no provision for reinstatement. This makes the license marginally more useful to copyright “trolls” who want to force companies to pay rather than come into compliance. The bug is fixed in GPL version 3, which introduces a “cure provision” under which a violator can usually have their license reinstated if the violation is promptly corrected.
Git, the Linux kernel, and other developer communities have used GPLv2 since 1991, and many are unlikely ever to switch to GPLv3, as that would require agreement from all copyright holders, and not everyone agrees with all of GPLv3’s changes. However, GPLv3’s cure provision is uncontroversial and can be backported to the extent that GPLv2 copyright holders agree.

How the GPL Cooperation Commitment helps

The GPL Cooperation Commitment is a way for a copyright holder to agree to extend GPLv3’s cure provision to all GPLv2 licenses they offer (and also to LGPLv2 and LGPLv2.1 licenses, which have the same bug). This gives violators a fair chance to come into compliance and have their licenses reinstated. The commitment also incorporates one of several principles for enforcing compliance with the GPL and other copyleft licenses as effective private regulation (the other principles do not relate directly to license terms). To know more about GitHub’s support for the GPL Cooperation Commitment, visit its official blog post.

GitHub now allows issue transfer between repositories; a public beta version
GitHub updates developers and policymakers on EU copyright Directive at Brussels
The LLVM project is ditching SVN for GitHub. The migration to Github has begun

Apple gets into chip development and self-driving autonomous tech business

Amrata Joshi
28 Jun 2019
3 min read
Apple recently hired Mike Filippo, a lead CPU architect and one of the top chip engineers at ARM Holdings, a semiconductor and software design company. According to Filippo’s updated LinkedIn profile, he joined Apple in May as an architect and is working out of the Austin, Texas area. He worked at ARM for ten years as the lead engineer designing the chips used in most smartphones and tablets, and previously was a key designer at chipmakers Advanced Micro Devices and Intel Corp. In a statement to Bloomberg, an ARM spokesman said, “Mike was a long-time valuable member of the ARM community. We appreciate all of his efforts and wish him well in his next endeavor.”

Apple’s A-series chips used in its mobile devices are built on ARM technology, while Mac computers have run on Intel processors for almost two decades. Filippo’s experience at both companies could therefore prove a major asset for Apple, which plans to use its own chips in Mac computers in 2020, replacing Intel processors with ARM-architecture-based ones. Apple also plans to expand its in-house chip-making work to new device categories, such as a headset that meshes augmented and virtual reality, Bloomberg reports.

Apple acquires Drive.ai, an autonomous driving startup

Beyond the chip-making business, there are reports of Apple joining the race for self-driving autonomous technology. The company has its own self-driving vehicle project, Titan, which is still a work in progress. On Wednesday, Axios reported that Apple has acquired Drive.ai, an autonomous driving startup valued at $200 million that was on the verge of shutting down and laying off all its staff. The acquisition indicates that Apple is interested in testing the waters of self-driving technology, and the move might help speed up the Titan project.
Drive.ai had been searching for a buyer since February this year and had talked with many potential acquirers before striking the deal with Apple. Apple also purchased Drive.ai’s autonomous cars and other assets. The acquisition price has not been disclosed, but per a recent report, Apple was expected to pay less than the $77 million that venture capitalists had invested. The company has also hired engineers and managers from Waymo and Tesla, and has recruited around five software engineers from Drive.ai, per a report from the San Francisco Chronicle. It seems Apple is mostly hiring people who work in engineering and product design.

Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driverKit, and much more!

Google’s Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says

Fatema Patrawala
30 Apr 2019
6 min read
Sidewalk Toronto, a joint venture between Sidewalk Labs, which is owned by Google parent company Alphabet Inc., and Waterfront Toronto, is proposing a high-tech neighbourhood called Quayside for the city’s eastern waterfront. In March 2017, Waterfront Toronto shared a request for proposals for the project with the Sidewalk Labs team; the project was approved in October 2017 and is currently championed by Alphabet’s Eric Schmidt and Sidewalk Labs CEO Daniel Doctoroff. According to Daniella Barreto, a digital activism coordinator for Amnesty International Canada, the project will normalize mass surveillance and is a direct threat to human rights.
https://twitter.com/AmnestyNow/status/1122932137513164801
The 12-acre smart city, to be located between East Bayfront and the Port Lands, promises to tackle the social and policy challenges affecting Toronto: affordable housing, traffic congestion and the impacts of climate change. Imagine self-driving vehicles shuttling you around a 24/7 neighbourhood featuring low-cost, modular buildings that easily switch uses based on market demand. Picture buildings heated or cooled by a thermal grid that doesn’t rely on fossil fuels, or garbage collection by industrial robots. Underpinning all of this is a network of sensors and other connected technology that will monitor and track environmental and human behavioural data.

That last part, about tracking human data, has sparked concerns. Much ink has been spilled in the press about privacy protections, and the issue has been raised repeatedly by citizens in two of four recent community consultations held by Sidewalk Toronto. The venture proposes to build the waterfront neighbourhood from scratch, embed sensors and cameras throughout, and effectively create a “digital layer” that could monitor the actions of individuals and collect their data.

In the Responsible Data Use Policy Framework released last year, the Sidewalk Toronto team made a number of commitments with regard to privacy, such as not selling personal information to third parties or using it for advertising purposes. Barreto further argues that privacy was declared a human right and is protected under the Universal Declaration of Human Rights adopted by the United Nations in 1948, yet in the Sidewalk Labs conversation privacy has been framed as a purely digital tech issue. Debates have focused on questions of data access: who owns it, how it will be used, where it should all be stored and what should be collected. In other words, the project would collect the minutest details of an individual’s everyday life: what medical offices they enter, what locations they frequent and who their visitors are, in turn giving away clues to physical or mental health conditions, immigration status, whether an individual is involved in any kind of sex work, their sexual orientation or gender identity, or the kind of political views they might hold. Down the line, that data could affect their health status, employment, where they are allowed to live, or where they can travel.

All of this raises a question: do citizens want their data collected at this scale at all? That conversation remains long overdue. Not all communities have agreed to participate in the initiative, as marginalized and racialized communities will be affected most by surveillance. The Canadian Civil Liberties Association (CCLA) has threatened to sue over the Sidewalk Toronto project, arguing that privacy protections should be spelled out before the project proceeds. Toronto’s mayor, John Tory, showed little interest in addressing these concerns during a panel on tech investment in Canada at South by Southwest (SXSW) on March 10; he attended the event to promote the city as a go-to tech hub to the international audience at SXSW and other industry events.

Last October, Saadia Muzaffar announced her resignation from Waterfront Toronto’s Digital Strategy Advisory Panel. “Waterfront Toronto’s apathy and utter lack of leadership regarding shaky public trust and social license has been astounding,” the author and founder of TechGirls Canada said in her resignation letter. Later that month, Dr. Ann Cavoukian, a privacy expert and consultant for Sidewalk Labs, resigned as well, because she wanted all data collection to be anonymized or “de-identified” at the source, protecting the privacy of citizens.

Why does big tech really want your data?

Data can be termed a rich resource, the “new oil,” as it can be mined in a number of ways, from licensing it for commercial purposes to making it open to the public and freely shareable. And like oil, data has the power to create class warfare, permitting those who own it to control the agenda and leaving those who don’t at their mercy. With the flow of data now contributing more to world GDP than the flow of physical goods, there’s a lot at stake for the different players. Corporations are the primary beneficiaries of personal data, monetizing it through advertising, marketing and sales; Facebook, for example, has repeatedly come under the radar over the past two to three years for violating user privacy and mishandling data. For governments, data may serve the public good, improving quality of life for citizens via data-driven design and policies. But in some cases minorities and the poor are hit hardest by the privacy harms of mass surveillance, discriminatory algorithms and other data-driven technological applications. Mass surveillance can also discourage public and private dissent, curtailing freedom of speech and expression.

As per a New York Times report, low-income Americans have experienced a long history of disproportionate surveillance. The poor bear the burden at both ends of the spectrum of privacy harms: they are subject to greater suspicion and monitoring when applying for government benefits, and they live in heavily policed neighborhoods. In some cases they also lose out on education and job opportunities.
https://twitter.com/JulieSBrill/status/1122954958544916480
In more promising news, the Oakland Privacy Advisory Commission today released two key documents: one on Oakland’s privacy principles and the other on a ban on facial recognition tech.
https://twitter.com/cfarivar/status/1123081921498636288
The commission emphasizes privacy in its framework: “Privacy is a fundamental human right, a California state right, and instrumental to Oaklanders’ safety, health, security, and access to city services. We seek to safeguard the privacy of every Oakland resident in order to promote fairness and protect civil liberties across all of Oakland’s diverse communities.”

Safety will be paramount for smart city initiatives such as Sidewalk Toronto. But we need more Oakland-like laws and policies that protect and support privacy and human rights, so that we can use technology safely, without things happening that we didn’t consent to.

#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google
Google announces new policy changes for employees to report misconduct amid complaints of retaliation and harassment
#GoogleWalkout organizers face backlash at work, tech workers show solidarity

Mapzen, an open-source mapping platform, joins the Linux Foundation project

Amrata Joshi
29 Jan 2019
3 min read
Yesterday, the Linux Foundation announced that Mapzen, an open-source mapping platform, is now a Linux Foundation project. Mapzen focuses on the core components of map display, such as search and navigation, and provides developers with open software and data sets that are easy to access. It was launched in 2013 by mapping industry veterans working alongside urban planners, architects, movie makers, and video game developers.

Randy Meech, former CEO of Mapzen and current CEO of StreetCred Labs, said, “Mapzen is excited to join the Linux Foundation and continue our open, collaborative approach to mapping software and data. Shared technology can be amazingly powerful, but also complex and challenging. The Linux Foundation knows how to enable collaboration across organizations and has a great reputation for hosting active, thriving software and data projects.”

Mapzen’s open resources and projects can be used to create applications or be integrated into other products and platforms. Because Mapzen’s resources are all open source, developers can build platforms without the restrictions that other commercial providers place on their data sets. Mapzen is used by organizations such as Snapchat, Foursquare, Mapbox, Eventbrite, The World Bank, HERE Technologies, and Mapillary. With Mapzen, it is possible to take open data and build maps with search and routing services, upgrade one’s own libraries, and process data in real time, which is not possible with conventional mapping or geotracking services.

Simon Willison, Engineering Director at Eventbrite, said, “We’ve been using Who’s On First to help power Eventbrite’s new event discovery features since 2017. The gazetteer offers a unique geographical approach which allows us to innovate extensively with how our platform thinks about events and their locations. Mapzen is an amazing project and we’re delighted to see it joining The Linux Foundation.”

Mapzen’s projects, which include Tangram, Valhalla and Pelias, are operated in the cloud and on-premise by a wide range of organizations.

Earlier this month, Hyundai joined Automotive Grade Linux (AGL) and the Linux Foundation to pursue innovation through open source; AGL is a cross-industry effort bringing automakers, suppliers and technology companies together to accelerate the development and adoption of an open software stack. Last year, Uber announced it was joining the Linux Foundation as a Gold Member, with an aim to support the open source community.

Jim Zemlin, executive director at the Linux Foundation, said, “Mapzen’s open approach to software and data has allowed developers and businesses to create innovative location-based applications that have changed our lives. We look forward to extending Mapzen’s impact even further around the globe in areas like transportation and traffic management, entertainment, photography and more to create new value for companies and consumers.”

According to the official press release, the Linux Foundation will align resources to advance Mapzen’s mission and further grow its ecosystem of users and developers.

Newer Apple maps is greener and has more details
Launching soon: My Business App on Google Maps that will let you chat with businesses, on the go
Atlassian overhauls its Jira software with customizable workflows, new tech stack, and roadmaps tool

Microsoft launches Quantum Katas, a programming project to learn Q#, its Quantum programming language

Sugandha Lahoti
30 Jul 2018
2 min read
Microsoft has announced Quantum Katas, a new portal for learning its quantum programming language, Q#. The project contains a self-paced set of programming tutorials that teach interested developers the basic elements of quantum computing as well as the Q# language. Microsoft has been one of the forerunners in the quantum computing race: last year it announced Q#, a domain-specific programming language for expressing quantum algorithms.

Quantum Katas, as the name implies, derives from the popular practice of code katas: exercises that develop your skills through practice and repetition. Per Microsoft, each kata offers a sequence of tasks on a certain quantum computing topic, progressing from simple to challenging. Each task is based on code-filling; tasks may vary from one line at the start to sizable code fragments as the tutorial progresses. Developers are provided reference materials to solve the tasks, both on quantum computing and on Q#, and a testing framework validates solutions, providing real-time feedback.

Each kata covers one topic. The current topics are:

Basic quantum computing gates. These tasks focus on the main single-qubit and multi-qubit gates used in quantum computing.
Superposition. In these tasks, you learn how to prepare a certain superposition state on one or multiple qubits.
Measurements. These tasks teach you to distinguish quantum states using measurements.
Deutsch–Jozsa algorithm. In these tasks, you learn how to write quantum oracles which implement classical functions, and the Bernstein–Vazirani and Deutsch–Jozsa algorithms.

To use the katas, you need to install the Quantum Development Kit for Windows 10, macOS or Linux. The kit includes all of the pieces a developer needs to get started, including the Q# programming language and compiler, a Q# library, a local quantum computing simulator, a quantum trace simulator and a Visual Studio extension.
Microsoft Quantum Katas was developed following the Q# coding contest that took place earlier this month, which challenged more than 650 developers to solve quantum-related problems. You can read more about Quantum Katas on GitHub.

Quantum Computing is poised to take a quantum leap with industries and governments on its side
Q# 101: Getting to know the basics of Microsoft’s new quantum computing language
“The future is quantum” — Are you excited to write your first quantum computing code using Microsoft’s Q#?