Tech News

3711 Articles

Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference

Prasad Ramesh
10 Dec 2018
3 min read
Researchers from the Rochester Institute of Technology have published a paper describing a method to maintain speed and accuracy when dealing with high-dimensional data sets.

What is the paper about?

The paper, titled Sparse Covariance Modeling in High Dimensions with Gaussian Processes, studies the statistical relationships among components of high-dimensional observations. The researchers propose modeling the changing covariances of observation elements as sparse multivariate stochastic processes. In particular, their novel covariance modeling method reduces dimensionality by relating the observation vectors to a lower-dimensional subspace. The changing correlations are characterized by jointly modeling the latent factors and factor loadings as collections of basis functions that vary with the covariates as Gaussian processes. Basis sparsity is encoded by automatic relevance determination (ARD) through the coefficients to account for inherent redundancy. Experiments conducted across various domains show that the method outperforms the best current methods.

What modeling methods are used?

In many AI applications, there are complex relationships among different components of high-dimensional data sets. These relationships can change across non-random covariates, say, an experimental condition. Two examples listed in the paper, which were also used in the experiments to test the method, are as follows:

• In computational gene regulatory network (GRN) inference, the topological structures of GRNs are context dependent: the interactions of gene activities differ under conditions such as temperature and pH.
• In a data set of crime occurrences, correlations appear among spatially disjoint regions, and these spatial correlations change over time.

In such cases, the modeling methods used typically combine heterogeneous data taken from different experimental conditions, or sometimes from a single data set. The researchers propose a novel covariance modeling method that allows cov(y|x) = Σ(x) to change flexibly with x.

One of the authors, Rui Li, stated: “This research is motivated by the increasing prevalence of high-dimensional data sets and the computational capacity to analyze and model their volatility and co-volatility varying over some covariates. The study proposed a methodology to scale to high dimensional observations by reducing the dimensions while preserving the latent information; it allows sharing information in the latent basis across covariates.”

The results compared favorably to other methods across different experiments: the approach is robust to the choice of hyperparameters and produces a lower root mean square error (RMSE). The paper was presented at NeurIPS 2018, and you can read it here.

How NeurIPS 2018 is taking on its diversity and inclusion challenges
Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
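The ARD idea mentioned in the article is not unique to this paper and can be previewed with off-the-shelf tools. The sketch below is not the authors' model; it is a minimal Python illustration, assuming a standard scikit-learn setup, of how a Gaussian process with one length scale per input dimension (the usual ARD formulation) learns to down-weight irrelevant covariates.

```python
# Illustrative only: ARD-style relevance determination with a Gaussian process.
# This is NOT the paper's sparse covariance model, just the standard ARD idea:
# each input dimension gets its own length scale, and dimensions the GP treats
# as irrelevant end up with large learned length scales.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                       # 5 covariate dimensions
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + 0.05 * rng.normal(size=200)   # dims 3-5 are pure noise

kernel = RBF(length_scale=np.ones(5)) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Inverse length scales act as relevance weights: a huge length scale means
# the corresponding dimension barely influences the fitted covariance.
length_scales = gp.kernel_.k1.length_scale
print("learned length scales per dimension:", np.round(length_scales, 2))
```

Dimensions the model treats as irrelevant come out with very large length scales, which is the same pruning effect the paper's basis-coefficient sparsity pursues at a much larger scale.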


Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!

Vincy Davis
09 May 2019
3 min read
The three-day Red Hat Summit 2019 has kicked off at the Boston Convention and Exhibition Center in the United States. Yesterday, Red Hat announced a major release, Red Hat OpenShift 4, the next generation of its trusted enterprise Kubernetes platform, re-engineered to address the complex realities of container orchestration in production systems. Red Hat OpenShift 4 simplifies hybrid and multicloud deployments to help IT organizations utilize new applications and help businesses thrive and gain momentum in an ever-increasing set of competitive markets.

Features of Red Hat OpenShift 4

Simplify and automate the cloud, everywhere

Red Hat OpenShift 4 helps automate and operationalize the best practices for modern application platforms. It operates as a unified cloud experience for the hybrid world and enables an automation-first approach, including:

• Self-managing platform for hybrid cloud: provides a cloud-like experience via automatic software updates and lifecycle management across the hybrid cloud, enabling greater security, auditability, repeatability, ease of management, and user experience.
• Adaptability and heterogeneous support: will be available in the coming months across major public cloud vendors including Alibaba, Amazon Web Services (AWS), Google Cloud, IBM Cloud, and Microsoft Azure, private cloud technologies like OpenStack, virtualization platforms, and bare-metal servers.
• Streamlined full stack installation: combined with an automated process, this makes it easier to get started with enterprise Kubernetes.
• Simplified application deployments and lifecycle management: Red Hat has brought stateful and complex applications to Kubernetes with Operators, which help with self-operating application maintenance, scaling, and failover.

Trusted enterprise Kubernetes

The Cloud Native Computing Foundation (CNCF) has certified the Red Hat OpenShift Container Platform in accordance with Kubernetes. It is built on the backbone of the world’s leading enterprise Linux platform, backed by Red Hat’s open source expertise, compatible ecosystem, and leadership. It also provides a codebase that helps secure key innovations from upstream communities.

Empowering developers to innovate

OpenShift 4 supports the evolving needs of application development as a consistent platform to optimize developer productivity, with:

• Self-service, automation, and application services that help developers extend their applications through on-demand provisioning of application services.
• Red Hat CodeReady Workspaces, which enables developers to harness the power of containers and Kubernetes while working with the familiar Integrated Development Environment (IDE) tools they use day to day.
• OpenShift Service Mesh, which combines the Istio, Jaeger, and Kiali projects as a single capability that encodes communication logic for microservices-based application architectures.
• Knative for building serverless applications, in Developer Preview, which makes Kubernetes an ideal platform for building, deploying, and managing serverless or function-as-a-service (FaaS) workloads.
• KEDA (Kubernetes-based Event-Driven Autoscaling), a collaboration between Microsoft and Red Hat that supports deployment of serverless event-driven containers on Kubernetes, enabling Azure Functions in OpenShift, in Developer Preview. This allows accelerated development of event-driven, serverless functions across hybrid cloud and on-premises with Red Hat OpenShift.

Red Hat says OpenShift 4 will be available in the coming months. To read more details about OpenShift 4, head over to the official press release on Red Hat. To learn about the other major announcements at Red Hat Summit 2019, such as the Microsoft collaboration and Red Hat Enterprise Linux 8 (RHEL 8), visit our coverage of Red Hat Summit 2019 Highlights.

Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend


Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge

Fatema Patrawala
20 Jun 2019
5 min read
Yesterday, The Verge published a gut-wrenching investigative report about the terrible working conditions of Facebook moderators at one of its contract vendor sites in North America, in Tampa. Facebook’s content moderation site in Tampa, Florida, is operated by the professional services firm Cognizant. It is one of the lowest-performing sites in North America and has never consistently enforced Facebook’s policies with the 98 percent accuracy required by Cognizant’s contract.

In February, The Verge had published a similar report on the deplorable working conditions of content moderators at Facebook’s Arizona site. Both reports were investigated and written by acclaimed tech reporter Casey Newton. Yesterday’s article is based on interviews with 12 current and former moderators and managers at the Tampa site. In most cases, pseudonyms are used to protect employees from potential retaliation from Facebook and Cognizant. But for the first time, three former moderators for Facebook agreed to break their nondisclosure agreements and discuss working conditions at the site on the record.

https://twitter.com/CaseyNewton/status/1141317045881069569

The working conditions for the content moderators are filled with filth and stress, to the extent that one moderator reportedly died amid the emotional trauma moderators go through every day. Keith Utley was a lieutenant commander in the military who, after retiring, chose to work as a Facebook moderator at the Tampa site.

https://twitter.com/CaseyNewton/status/1141316396942602240

Keith worked the overnight shift, moderating the worst content posted daily by users on Facebook, including hate speech, murders, and child pornography. Utley had a heart attack at his desk and died last year. Senior management initially discouraged employees from discussing the incident and tried to hide the fact that Keith had died, for fear it would hurt productivity. But Keith’s father visited the site to collect his belongings, broke down emotionally, and said, “My son died here.”

The moderators further mention that the Tampa site has only one bathroom for all 800 employees working there, and the bathroom has repeatedly been found smeared with feces and menstrual blood. The office coordinators did not care about cleaning the site, and it was infested with bed bugs. Workers also found fingernails and pubic hair on their desks.

“Bed bugs can be found virtually every place people tend to gather, including the workplace,” Cognizant said in a statement. “No associate at this facility has formally asked the company to treat an infestation in their home. If someone did make such a request, management would work with them to find a solution.”

There have been instances of sexual harassment at the workplace as well; workers have filed two such cases since April, which are now before the US Equal Employment Opportunity Commission. Physical and verbal fights often break out in the office, and thefts from the office premises are common. One former moderator bluntly told The Verge that if anything needs to change, it is only one thing: Facebook needs to shut down.

https://twitter.com/jephjacques/status/1141330025897168897

Many significant voices have joined the call to break up Facebook, including Elizabeth Warren, US presidential candidate for 2020, who wants to break up big tech. Another comes from Chris Hughes, one of the founders of Facebook, who published an op-ed on why he thinks it is time to break up Facebook.

In response to this investigation, Facebook spokesperson Chris Harrison said the company will conduct an audit of its partner sites and make other changes to promote the well-being of its contractors. He said the company would consider making more moderators full-time employees in the future, and hopes to provide counseling for moderators after they leave.

The news garnered public anger and rage towards Facebook; people have commented that Facebook defecates on humanity and profits enormously while getting away with it easily.

https://twitter.com/pemullen/status/1141357359861645318

Another comment reads that Facebook’s mission of connecting the world has been an abject failure, and that the world is worse off from being connected in the ways Facebook has done it. Others commented that this story is a reminder of how little these big tech firms care about people.

https://twitter.com/stautistic/status/1141424512736485376

Siva Vaidhyanathan, author of the book Antisocial Media and a columnist at the Guardian, applauded Casey Newton for bringing up this story, but also noted that Newton overlooked the work of Sarah T. Roberts, who has written an entire book on this topic, Behind the Screen.

https://twitter.com/sivavaid/status/1141330295376863234

Check out the full story covered by The Verge on their official blog post.

Facebook releases Pythia, a deep learning framework for vision and language multimodal research
After refusing to sign the Christchurch Call to fight online extremism, Trump admin launches tool to defend “free speech” on social media platforms
How Genius used embedded hidden Morse code in lyrics to catch plagiarism in Google search results


Microsoft’s Bing ‘back to normal’ in China

Savia Lobo
25 Jan 2019
2 min read
On Wednesday, Microsoft announced that its search engine, Bing, had been blocked in China, though it was unsure whether this was due to China’s Great Firewall censorship or a technical glitch. The search engine is now back online after being down for two consecutive days.

The site may have been blocked by government censors. Many users posted on Weibo, one of the most popular social networks in China, commenting that “Bing is back” and “Bing returns to normal.”

ZDNet also pointed out a notable fact: “The temporary block of Microsoft's Bing comes at a time when tensions between the US and China are running high, with the introduction of a bipartisan Bill in the US earlier this month to ban the sale of tech to Chinese companies Huawei and ZTE, and the US stating on Wednesday its intention to extradite Huawei CFO Meng Wanzhou.”

Though Bing is not widely used in China, it has been one of the few remaining portals to the broader internet as the Chinese government isolates China’s internet from the rest of the world. Bing remains the only US-based search engine accessible in China because “Microsoft has worked to follow the government’s censorship practices around political topics”, the New York Times reported.

In an interview with Fox Business Network at the World Economic Forum in Davos, Switzerland, Microsoft’s president, Brad Smith, said, “There are times when there are disagreements, there are times when there are difficult negotiations with the Chinese government, and we’re still waiting to find out what this situation is about.”

What the US-China tech and AI arms race means for the world – Frederick Kempe at Davos 2019
Packt helped raise almost $1 million for charity with Humble Bundle in 2018
Sweden is at a crossroads with its nearly cashless society: To Swish or e-krona?


Racket 7.2, a descendent of Scheme and Lisp, is now out!

Bhagyashree R
01 Feb 2019
2 min read
On Wednesday, the team behind Racket released Racket 7.2. Racket is a general-purpose, multi-paradigm programming language based on Scheme and Lisp that emphasizes functional programming.

Racket’s core is built on a lot of C code, which affects its portability to different systems, its maintenance, and its performance. Hence, back in 2017, the team decided to make the Racket distribution run on Chez Scheme. The Racket-on-Chez Scheme (Racket CS) implementation is now almost complete, with all functionality in place. Sharing the status of Racket CS, the blog post reads, “DrRacket CS works fully, the main Racket CS distribution can build itself, and 99.95% of the core Racket test suite passes”. Though the code runs fine, some work remains on end-to-end performance before Racket CS can become the default implementation of Racket.

The following updates apply to both implementations of Racket:

• Contract system: the contract system, which guards one part of a program from another, now supports collapsible contracts. This prevents repeated wrappers in certain pathological situations.
• Quickscript: Quickscript, a tool for quickly and easily extending DrRacket’s features, now comes bundled with the standard distribution.
• Web server configuration: the built-in configuration used for serving static files now recognizes the “.mjs” extension for JavaScript modules.
• The data/enumerate library: the library now supports an additional form of subtraction via but-not/e.
• The racklog library: a number of improvements, such as fixing logic variable binding, making logic variables containing predicates applicable, and introducing an %andmap higher-order predicate.

Read the official announcement at Racket’s website.

Racket v7.0 is out with overhauled internals, updates to DrRacket, TypedRacket among others
Pharo 7.0 released with 64-bit support, a new build process and more
PayPal replaces Flow with TypeScript as their type checker for every new web app


Apple gets into chip development and self-driving autonomous tech business

Amrata Joshi
28 Jun 2019
3 min read
Apple recently hired Mike Filippo, lead CPU architect and one of the top chip engineers at ARM Holdings, a semiconductor and software design company. According to Mike Filippo’s updated LinkedIn profile, he joined Apple in May as an architect and is working out of the Austin, Texas area. He worked at ARM for ten years as the lead engineer designing the chips used in most smartphones and tablets. Previously, he had also worked as a key designer at chipmakers Advanced Micro Devices and Intel Corp.

In a statement to Bloomberg, a spokesman from ARM said, “Mike was a long-time valuable member of the ARM community.” He further added, “We appreciate all of his efforts and wish him well in his next endeavor.”

Apple’s A-series chips, used in its mobile devices, are based on ARM technology, while Mac computers have had Intel processors for almost two decades, so Filippo’s experience at these companies could prove to be a major plus for Apple. Apple has planned to use its own chips in Mac computers in 2020, replacing Intel processors with ARM-architecture-based ones. Apple also plans to expand its in-house chip-making work to new device categories, like a headset that meshes augmented and virtual reality, Bloomberg reports.

Apple acquires Drive.ai, an autonomous driving startup

Apart from the chip-making business, there are reports of Apple racing into self-driving autonomous technology. The company has also been working on its own self-driving vehicle project, called Titan, which is still a work in progress. On Wednesday, Axios reported that Apple acquired Drive.ai, an autonomous driving startup valued at $200 million. Drive.ai was on the verge of shutting down and was laying off all its staff. The news indicates that Apple is interested in testing the waters of self-driving autonomous technology, and this move might help speed up the Titan project.

Drive.ai had been searching for a buyer since February this year and had communicated with many potential acquirers before the deal with Apple came through. Apple also purchased Drive.ai's autonomous cars and other assets. The amount Apple paid for Drive.ai has not been disclosed, but as per a recent report, Apple was expected to pay less than the $77 million invested by venture capitalists. The company has also hired engineers and managers from Waymo and Tesla, and has recruited around five software engineers from Drive.ai, according to a report from the San Francisco Chronicle. It seems Apple is mostly hiring people who work in engineering and product design.

Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driverKit, and much more!

GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation

Savia Lobo
08 Nov 2018
3 min read
Yesterday, GitHub announced that it now supports the GPL Cooperation Commitment, along with 40 other software companies, because it aligns with GitHub’s core values. According to the GitHub post, by supporting this change GitHub “hopes that this commitment will improve fairness and certainty for users of key projects that the developer ecosystem relies on, including Git and the Linux kernel. More broadly, the GPL Cooperation Commitment provides an example of evolving software regulation to better align with social goals, which is urgently needed as developers and policymakers grapple with the opportunities and risks of the software revolution.”

An effective regulation has an enforcement mechanism that encourages compliance. The most severe penalties for non-compliance, such as shutting down a line of business, are reserved for repeat and intentional violators. Less serious or accidental non-compliance may result only in warnings, after which the violation should be promptly corrected.

GPL as a private software regulation

The GNU General Public License (GPL) is a tool for a private regulator (the copyright holder) to achieve a social goal: “under the license, anyone who receives a covered program has the freedom to run, modify, and share that program.” However, from the perspective of an effective regulator, GPL version 2 has a bug. Due to this bug, “non-compliance results in termination of the license, with no provision for reinstatement. This further makes the license marginally more useful to copyright ‘trolls’ who want to force companies to pay rather than come into compliance.”

The bug is fixed in GPL version 3 by introducing a “cure provision,” under which a violator can usually have their license reinstated if the violation is promptly corrected. Git and other developer communities, including the Linux kernel, have used GPLv2 since 1991; many of them are unlikely to ever switch to GPLv3, as this would require agreement from all copyright holders, and not everyone agrees with all of GPLv3’s changes. However, GPLv3’s cure provision is uncontroversial and can be backported to the extent GPLv2 copyright holders agree.

How the GPL Cooperation Commitment helps

The GPL Cooperation Commitment is a way for a copyright holder to agree to extend GPLv3’s cure provision to all GPLv2 licenses offered (and also to LGPLv2 and LGPLv2.1, which have the same bug). This gives violators a fair chance to come into compliance and have their licenses reinstated. The commitment also incorporates one of several principles (the others do not relate directly to license terms) for enforcing compliance with the GPL and other copyleft licenses as effective private regulation.

To know more about GitHub’s support for the GPL Cooperation Commitment, visit its official blog post.

GitHub now allows issue transfer between repositories; a public beta version
GitHub updates developers and policymakers on EU copyright Directive at Brussels
The LLVM project is ditching SVN for GitHub. The migration to Github has begun


Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact

Fatema Patrawala
23 May 2019
3 min read
According to reports from Reuters, Amazon shareholders on Wednesday rejected a proposal to ban the sale of its facial recognition tech to governments. The shareholders also rejected other proposals on climate change policy, salary transparency, and other equity issues. Amazon’s annual proxy statement included 11 resolutions, and all 11 were reportedly rejected by shareholders.

In January this year, activist shareholders proposed a resolution to limit the sale of Amazon’s facial recognition tech, called Rekognition, to law enforcement and government agencies. The technology has been found to be biased and inaccurate and is regarded as an enabler of racial discrimination against minorities. Rekognition, which runs image and video analysis of faces, has been sold to two states so far, and Amazon has also pitched it to Immigration and Customs Enforcement.

The first proposal asked the Board of Directors to stop sales of Rekognition, Amazon’s face surveillance technology, to the government. The second demanded an independent review of its human and civil rights impacts, particularly for people of color, immigrants, and activists, who have always been disproportionately impacted by surveillance. The resolutions failed despite an effort by the ACLU and other civil rights groups to back the measures. On Tuesday, the civil liberties group wrote an open letter accusing the tech giant of being “non-responsive” to privacy concerns.

https://twitter.com/Matt_Cagle/status/1130586385595789312

Shankar Narayan of ACLU Washington made strong remarks on the vote: “The fact that there needed to be a vote on this is an embarrassment for Amazon’s leadership team. It demonstrates shareholders do not have confidence that company executives are properly understanding or addressing the civil and human rights impacts of its role in facilitating pervasive government surveillance.”

“While we have yet to see the exact breakdown of the vote, this shareholder intervention should serve as a wake-up call for the company to reckon with the real harms of face surveillance and to change course,” he said.

The ACLU said in its letter that investors and shareholders hold the power to protect Amazon from its own failed judgment. Amazon pushed back against the claims that the technology is inaccurate and called on the U.S. Securities and Exchange Commission to block the shareholder proposal prior to its annual shareholder meeting. But the ACLU blocked Amazon’s efforts to stop the vote, amid growing scrutiny of its product. According to an Amazon spokeswoman, the resolutions failed by a wide margin. Amazon has defended its work, said all users must follow the law, and added a web portal for people to report any abuse of the service.

The votes were non-binding, allowing the company to reject the outcome. But it was all but inevitable that the votes would fail: Amazon CEO Jeff Bezos holds 16% of the company's stock and voting rights, and its next four top institutional shareholders, The Vanguard Group, Blackrock, FMR, and State Street, collectively hold about the same amount of voting rights as Bezos.

Members of Congress also met at a House Committee hearing on Wednesday to discuss the civil rights impact of facial recognition technology. Responding to the shareholder vote, Democratic U.S. Representative Jimmy Gomez said “that just means that it’s more important that Congress acts.”

Amazon resists public pressure to re-assess its facial recognition business; “failed to act responsibly”, says ACLU
Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting
Amazon S3 is retiring support for path-style API requests; sparks censorship fears


Google’s Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says

Fatema Patrawala
30 Apr 2019
6 min read
Sidewalk Toronto, a joint venture between Sidewalk Labs, which is owned by Google parent company Alphabet Inc., and Waterfront Toronto, is proposing a high-tech neighbourhood called Quayside for the city’s eastern waterfront. In March 2017, Waterfront Toronto shared a request for proposals for this project with the Sidewalk Labs team. It was approved by October 2017, and the project is currently led by Alphabet’s Eric Schmidt and Sidewalk Labs CEO Daniel Doctoroff.

According to Daniella Barreto, a digital activism coordinator for Amnesty International Canada, the project will normalize mass surveillance and is a direct threat to human rights.

https://twitter.com/AmnestyNow/status/1122932137513164801

The 12-acre smart city, which will be located between East Bayfront and the Port Lands, promises to tackle the social and policy challenges affecting Toronto: affordable housing, traffic congestion, and the impacts of climate change. Imagine self-driving vehicles shuttling you around a 24/7 neighbourhood featuring low-cost, modular buildings that easily switch uses based on market demand. Picture buildings heated or cooled by a thermal grid that doesn’t rely on fossil fuels, or garbage collection by industrial robots. Underpinning all of this is a network of sensors and other connected technology that will monitor and track environmental and human behavioural data.

That last part, about tracking human data, has sparked concerns. Much ink has been spilled in the press about privacy protections, and the issue has been raised repeatedly by citizens in two of four recent community consultations held by Sidewalk Toronto. The venture proposes to build the waterfront neighbourhood from scratch, embed sensors and cameras throughout, and effectively create a “digital layer.” This digital layer could result in the monitoring of individuals’ actions and the collection of their data. In the Responsible Data Use Policy Framework released last year, the Sidewalk Toronto team made a number of commitments with regard to privacy, such as not selling personal information to third parties or using it for advertising purposes.

Barreto further argues that privacy was declared a human right and is protected under the Universal Declaration of Human Rights adopted by the United Nations in 1948. In the Sidewalk Labs conversation, however, privacy has been framed as a purely digital tech issue. Debates have focused on questions of data access: who owns it, how it will be used, where it should all be stored, and what should be collected. In other words, the project could collect the minutest details of an individual’s everyday life: which medical offices they enter, what locations they frequent, and who their visitors are, in turn giving away clues to physical or mental health conditions, immigration status, whether an individual is involved in any kind of sex work, their sexual orientation or gender identity, or the kind of political views they might hold. Further down the line, this could affect their health status, employment, where they are allowed to live, or where they can travel.

All of this raises a question: do citizens want their data to be collected at this scale at all? That conversation remains long overdue. Not all communities have agreed to participate in this initiative, as marginalized and racialized communities will be affected most by surveillance. The Canadian Civil Liberties Association (CCLA) has threatened to sue the Sidewalk Toronto project, arguing that privacy protections should be spelled out before the project proceeds.

Toronto’s Mayor John Tory showed little interest in addressing these concerns during a panel on tech investment in Canada at South by Southwest (SXSW) on March 10. Tory was at the event to promote the city as a go-to tech hub to the international audience at SXSW and other industry events. Last October, Saadia Muzaffar announced her resignation from Waterfront Toronto's Digital Strategy Advisory Panel. "Waterfront Toronto's apathy and utter lack of leadership regarding shaky public trust and social license has been astounding," the author and founder of TechGirls Canada said in her resignation letter. Later that month, Dr. Ann Cavoukian, a privacy expert and consultant for Sidewalk Labs, resigned too, as she wanted all data collection to be anonymized or "de-identified" at the source, protecting the privacy of citizens.

Why does big tech really want your data?

Data can be termed a rich resource, or in other words, the “new oil.” Like oil, it can be mined in a number of ways, from licensing it for commercial purposes to making it open to the public and freely shareable. And like oil, data has the power to create class warfare, permitting those who own it to control the agenda and leaving those who don’t at their mercy. With the flow of data now contributing more to world GDP than the flow of physical goods, there’s a lot at stake for the different players.

The benefits differ by player. Corporations are the primary beneficiaries of personal data, monetizing it through advertising, marketing, and sales; Facebook, for example, has repeatedly come under the radar over the past two to three years for violating user privacy and mishandling data. For governments, data may help the public good, improving quality of life for citizens via data-driven design and policies. But in some cases minorities and the poor are hit hardest by the privacy harms caused by mass surveillance, discriminatory algorithms, and other data-driven technological applications. Public and private dissent can also be discouraged via mass surveillance, curtailing freedom of speech and expression. As per a New York Times report, low-income Americans have experienced a long history of disproportionate surveillance; the poor bear the burden of both ends of the spectrum of privacy harms, being subject to greater suspicion and monitoring while applying for government benefits and living in heavily policed neighborhoods. In some cases they also lose out on education and job opportunities.

https://twitter.com/JulieSBrill/status/1122954958544916480

In more promising news, the Oakland Privacy Advisory Commission today released two key documents, one on Oakland's privacy principles and the other on a ban on facial recognition tech.

https://twitter.com/cfarivar/status/1123081921498636288

The commission emphasizes privacy in the framework, stating: “Privacy is a fundamental human right, a California state right, and instrumental to Oaklanders’ safety, health, security, and access to city services. We seek to safeguard the privacy of every Oakland resident in order to promote fairness and protect civil liberties across all of Oakland’s diverse communities.”

Safety will be paramount for smart city initiatives such as Sidewalk Toronto. But we need more Oakland-like laws and policies that protect and support privacy and human rights, ones where we are able to use technology in a safe way and where things we did not consent to are not happening.

#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google
Google announces new policy changes for employees to report misconduct amid complaints of retaliation and harassment
#GoogleWalkout organizers face backlash at work, tech workers show solidarity


Mapzen, an open-source mapping platform, joins the Linux Foundation project

Amrata Joshi
29 Jan 2019
3 min read
Yesterday, the Linux Foundation announced that Mapzen, an open-source mapping platform, is now a Linux Foundation project. Mapzen focuses on the core components of map display, such as search and navigation, and provides developers with open software and data sets that are easy to access. It was launched in 2013 by mapping industry veterans in combination with urban planners, architects, movie makers, and video game developers.

Randy Meech, former CEO of Mapzen and current CEO of StreetCred Labs, said, “Mapzen is excited to join the Linux Foundation and continue our open, collaborative approach to mapping software and data. Shared technology can be amazingly powerful, but also complex and challenging. The Linux Foundation knows how to enable collaboration across organizations and has a great reputation for hosting active, thriving software and data projects.”

Mapzen’s open resources and projects can be used to create applications or be integrated into other products and platforms. Because Mapzen’s resources are all open source, developers can easily build platforms without the restrictions of data sets from other commercial providers. Mapzen is used by organizations such as Snapchat, Foursquare, Mapbox, Eventbrite, The World Bank, HERE Technologies, and Mapillary. With Mapzen, it is possible to take open data and build maps with search and routing services, upgrade your own libraries, and process data in real time, which is not possible with conventional mapping or geotracking services.

Simon Willison, Engineering Director at Eventbrite, said, “We’ve been using Who’s On First to help power Eventbrite’s new event discovery features since 2017. The gazetteer offers a unique geographical approach which allows us to innovate extensively with how our platform thinks about events and their locations. Mapzen is an amazing project and we’re delighted to see it joining The Linux Foundation.”

Mapzen is operated in the cloud and on-premise by a wide range of organizations, through projects including Tangram, Valhalla, and Pelias.

Earlier this month, Hyundai joined Automotive Grade Linux (AGL) and the Linux Foundation to drive innovation through open source, a cross-industry effort bringing automakers, suppliers, and technology companies together to accelerate the development and adoption of an open software stack. Last year, Uber announced that it was joining the Linux Foundation as a Gold Member with the aim of supporting the open source community.

Jim Zemlin, executive director at the Linux Foundation, said, “Mapzen’s open approach to software and data has allowed developers and businesses to create innovative location-based applications that have changed our lives. We look forward to extending Mapzen’s impact even further around the globe in areas like transportation and traffic management, entertainment, photography and more to create new value for companies and consumers.”

According to the official press release, the Linux Foundation will align resources to advance Mapzen’s mission and further grow its ecosystem of users and developers.

Newer Apple maps is greener and has more details
Launching soon: My Business App on Google Maps that will let you chat with businesses, on the go
Atlassian overhauls its Jira software with customizable workflows, new tech stack, and roadmaps tool
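Pelias, one of the Mapzen projects named above, is the geocoding component of that stack and is typically queried over HTTP on self-hosted installs. The snippet below is only a rough sketch, assuming a hypothetical local Pelias instance at http://localhost:4000 and the documented /v1/search endpoint shape; it is not an official client.

```python
# Illustrative only: querying a self-hosted Pelias geocoder (one of the Mapzen
# projects mentioned above). Assumes a hypothetical instance at localhost:4000.
import requests

PELIAS_URL = "http://localhost:4000/v1/search"  # assumption: local install

def geocode(text: str, size: int = 3):
    """Return (label, lon, lat) tuples for the top matches of a free-text query."""
    resp = requests.get(PELIAS_URL, params={"text": text, "size": size}, timeout=10)
    resp.raise_for_status()
    features = resp.json().get("features", [])  # Pelias returns GeoJSON features
    return [
        (f["properties"].get("label"), *f["geometry"]["coordinates"])
        for f in features
    ]

if __name__ == "__main__":
    for label, lon, lat in geocode("Toronto waterfront"):
        print(f"{label}: ({lat:.5f}, {lon:.5f})")
```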

Microsoft Edge mobile browser now shows warnings against fake news using NewsGuard

Bhagyashree R
24 Jan 2019
3 min read
The Microsoft Edge mobile browser is now flagging untrustworthy news sites with the help of a plugin named NewsGuard. Microsoft partnered with NewsGuard in August 2018 under its Defending Democracy Program. The functionality was first offered as a downloadable plugin, but Microsoft has now started installing it automatically in the mobile version of Edge. Currently, it is an opt-in feature, which you can enable from the Settings menu.

NewsGuard was founded by journalists Steven Brill and Gordon Crovitz. It evaluates a news site against nine specific criteria, including its use of deceptive headlines and its transparency regarding ownership and financing, and gives users a color-coded rating in green or red. Its business model is essentially licensing its product to tech companies that aim to fight fake news.

According to The Guardian, NewsGuard was warning users when they visited Mail Online: “Proceed with caution: this website generally fails to maintain basic standards of accuracy and accountability.” Brill says that NewsGuard takes complete responsibility for its verdicts and that all complaints should be directed at his company rather than Microsoft: “They can blame us. And we’re happy to be blamed. Unlike the platforms we’re happy to be accountable. We want people to game our system. We are totally transparent. We are not an algorithm.” A spokesperson for Mail Online told The Guardian, “We have only very recently become aware of the NewsGuard startup and are in discussions with them to have this egregiously erroneous classification resolved as soon as possible.”

Though NewsGuard says its verdicts are made by experienced journalists, users have pointed out some issues. One Hacker News user said that the core concern with NewsGuard is that it flags unreliable content at the site level instead of the article level. Explaining the consequences, he wrote, “It's obvious why that's necessary, but the result is a complete failure to deal with any source where quality varies widely. Fox's written reporting is sometimes quite good, but Glenn Beck's old videos are still posted under the same domain. The result is that NewsGuard happily puts a big green check-mark above a video declaring that the US is the only country in the world with birthright citizenship.”

An opt-in feature is not a big deal on its own, but over time this could become the default. We can’t deny that, with time, users might simply see the green or red icon and follow a website based on that rating alone. Another Hacker News user says this could lead to something like “truth as a service,” where you are no longer using your own critical thinking and just take what the machine says. This concern is supported by a study done by Gallup and the Knight Foundation, which surveyed 2,000 adults in the U.S.: participants were shown articles with and without the ratings, and the results revealed that readers are more likely to trust articles that included the green icon in the address bar.

Read the full story at The Guardian website.

Microsoft confirms replacing EdgeHTML with Chromium in Edge
Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge


Uber introduces Base Web, an open source “unified” design system for building websites in React

Bhagyashree R
27 Apr 2019
2 min read
Uber’s design and engineering team has introduced a universal design system called Base Web, which it open sourced in 2018. Base Web is a suite of React components implementing the “base” design language for quickly and easily creating web applications.

At Uber, developers, product managers, operations teams, and other employees interact with different web applications on a daily basis. Because all of these web applications function differently, this adds the overhead of learning how to interact with each of them effectively. To reduce this time and effort, Uber wanted a universal system that would act as “a foundation, a basis for initiating, evolving, and unifying web products”. A universal design system helps teams of engineers, designers, and product managers work together easily. It also helps new engineers and designers quickly get the hang of the components and design tokens used by a given engineering organization.

One of the key reasons for introducing Base Web was to make it easy for developers to reuse components. After talking to its engineers, Uber’s design and engineering team determined that they mainly needed access to:

• Style customizations
• The ability to modify the rendering of a component

So, they introduced a unified overrides API, which comes with the following benefits:

• It eliminates top-level properties API overload.
• Extra properties no longer proxy inconsistently across the composable components.
• It allows you to completely replace the components.

Uber is now using Base Web across teams to create its web applications. “Open sourced in 2018 to enable others to experience the benefits of this solution, Base Web is now used across Uber, ensuring a seamless development experience across our web applications,” reads the announcement.

To read the official announcement, visit Uber’s official website.

Uber open-sources Peloton, a unified Resource Scheduler
Introducing ‘Quarkus’, a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot
Uber and Lyft drivers strike in Los Angeles


Microsoft launches Quantum Katas, a programming project to learn Q#, its Quantum programming language

Sugandha Lahoti
30 Jul 2018
2 min read
Microsoft has announced Quantum Katas, a new portal for learning its quantum programming language, Q#. The project contains a self-paced set of programming tutorials that teach interested developers the basic elements of quantum computing as well as the Q# language. Microsoft has been one of the forerunners in the quantum computing race: last year, it announced Q#, a domain-specific programming language for expressing quantum algorithms.

Quantum Katas, as the name implies, is derived from the popular code kata programming technique, an exercise that develops your skills through practice and repetition. Per Microsoft, each kata offers a sequence of tasks on a certain quantum computing topic, progressing from simple to challenging. Each task is based on filling in code; tasks may vary from a single line at the start to sizable code fragments as the tutorial progresses. Developers are also provided reference materials, both on quantum computing and on Q#, to solve the tasks, and a testing framework validates solutions, providing real-time feedback.

Each kata covers one topic. The current topics are:

• Basic quantum computing gates: these tasks focus on the main single-qubit and multi-qubit gates used in quantum computing.
• Superposition: in these tasks, you learn how to prepare a certain superposition state on one or multiple qubits.
• Measurements: these tasks teach you to distinguish quantum states using measurements.
• Deutsch–Jozsa algorithm: in these tasks, you learn how to write quantum oracles that implement classical functions, as well as the Bernstein–Vazirani and Deutsch–Jozsa algorithms.

To use these katas, you need to install the Quantum Development Kit for Windows 10, macOS, or Linux. The kit includes all of the pieces a developer needs to get started, including the Q# programming language and compiler, a Q# library, a local quantum computing simulator, a quantum trace simulator, and a Visual Studio extension.

Microsoft Quantum Katas was developed following the results of the Q# coding contest that took place earlier this month, which challenged more than 650 developers to solve quantum-related questions. You can read more about Quantum Katas on GitHub.

Quantum Computing is poised to take a quantum leap with industries and governments on its side
Q# 101: Getting to know the basics of Microsoft’s new quantum computing language
“The future is quantum” — Are you excited to write your first quantum computing code using Microsoft’s Q#?
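The katas themselves are written in Q#, but the linear algebra behind the Superposition and Measurements topics is easy to preview without installing the Quantum Development Kit. The sketch below is a plain Python/NumPy illustration (not Q#, and not part of the katas): it applies a Hadamard gate to |0⟩ and samples measurements from the resulting state.

```python
# Illustration only (plain Python/NumPy, not Q#): the math behind the
# Superposition and Measurements katas. A Hadamard gate turns |0> into
# (|0> + |1>)/sqrt(2); measuring then yields 0 or 1 with probability 1/2 each.
import numpy as np

ket0 = np.array([1.0, 0.0])                       # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate

state = H @ ket0                                  # equal superposition
probs = np.abs(state) ** 2                        # Born rule: |amplitude|^2

rng = np.random.default_rng(42)
samples = rng.choice([0, 1], size=1000, p=probs)  # simulated measurements
print("P(0), P(1) =", probs)                      # approximately [0.5, 0.5]
print("observed frequency of 1:", samples.mean())
```

In the katas, the equivalent step is a single Q# gate call, with the provided testing framework checking the resulting state for you.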

Elvis Pranskevichus on limitations in SQL and how EdgeQL can help

Bhagyashree R
10 May 2019
3 min read
Structured Query Language (SQL), once considered “not a serious language” by its authors, has become the dominant query language for relational databases in the industry. Its battle-tested solutions, stability, and portability make it a reliable choice for operating on stored data. However, it does have its share of weak points, and that’s what Elvis Pranskevichus, founder of EdgeDB, listed in a post titled “We Can Do Better Than SQL”, published yesterday. He explained that we now need a “better SQL” and introduced the EdgeQL language, which aims to address SQL’s limitations.

SQL’s shortcomings

Following are some of the shortcomings Pranskevichus discusses in his post:

Lack of orthogonality

Orthogonality is the property that a change in one component has no side effects on any other component. In language design, it means allowing users to combine a small set of primitive constructs in a small number of ways. Orthogonality leads to a more compact and consistent design, while the lack of it leads to a language with many exceptions and caveats. Giving an example, Pranskevichus wrote, “A good example of orthogonality in a programming language is the ability to substitute an arbitrary part of an expression with a variable, or a function call, without any effect on the final result.” SQL does not permit this kind of generic substitution.

Lack of compactness

One side effect of not being orthogonal is a lack of compactness. SQL is also considered verbose because of its goal of being an English-like language catering to non-professionals. “However, with the growth of the language, this verbosity has contributed negatively to the ability to write and comprehend SQL queries. We learnt this lesson with COBOL, and the world has long since moved on to newer, more succinct programming languages. In addition to keyword proliferation, the orthogonality issues discussed above make queries more verbose and harder to read,” wrote Pranskevichus in his post.

Lack of consistency

Pranskevichus further adds that SQL is inconsistent in terms of both syntax and semantics. There is also a standardization problem: different database vendors implement their own versions of SQL, which often end up incompatible with other SQL variants.

Introducing EdgeQL

With EdgeQL, Pranskevichus aims to give users a language that is orthogonal, consistent, and compact, and that at the same time works with the generally applicable relational model. In short, he aims to make SQL better. EdgeQL considers every value a set and every expression a function over sets. This design allows you to factor any part of an EdgeQL expression into a view or a function without changing other parts of the query. It has no null; a missing value is simply an empty set, which has the advantage of leaving only two boolean logic states.

Read Pranskevichus’s original post for more details on EdgeQL.

Building a scalable PostgreSQL solution
PostgreSQL security: a quick look at authentication best practices [Tutorial]
How to handle backup and recovery with PostgreSQL 11 [Tutorial]
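Pranskevichus’s orthogonality example, quoted in the article above, is easy to make concrete. The snippet below is a small Python illustration of that general substitution property (it is about ordinary programming languages, not EdgeQL syntax): pulling a subexpression out into a variable or a function does not change the result, which is precisely the kind of refactoring SQL often refuses inside a query.

```python
# Illustration of the substitution property described above (plain Python,
# not EdgeQL): any subexpression can be replaced by a variable or a function
# call without changing the result of the whole expression.
def total_price(unit_price: float, qty: int, tax_rate: float) -> float:
    return unit_price * qty * (1 + tax_rate)

# Original expression:
a = total_price(9.99, 3, 0.13)

# Same expression with the subexpression (1 + tax_rate) factored into a variable:
tax_multiplier = 1 + 0.13
b = 9.99 * 3 * tax_multiplier

# ...or factored into a function call:
def multiplier(rate: float) -> float:
    return 1 + rate

c = 9.99 * 3 * multiplier(0.13)

assert a == b == c  # substitution had no effect on the final result
```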


Microsoft launches a free version of its Teams app to take Slack head on

Natasha Mathur
13 Jul 2018
3 min read
Yesterday, Microsoft announced a free version of its Microsoft Teams app, posing heavy competition to rival chat service Slack. The new Teams app comes loaded with key features such as unlimited free chat messages, app integrations, and support for up to 300 users. Slack’s free version, by contrast, limits users to 10,000 searchable messages, making the Teams app a strong contender.

Until now, Teams was only offered to clients paying for an Office 365 subscription. But there is no need to be an Office 365 subscriber anymore to experience the power of the Teams app, as stated by Lori Wright, General Manager of Microsoft 365 Teamwork. Let’s have a look at the features the free version of the Teams app offers.

Key features

• The free version of the Teams app is globally available in 40 different languages.
• It offers unlimited chat messages, app integrations, and search.
• It provides 10 GB of team file storage plus an additional 2 GB per user.
• It has built-in online Office apps, including Word, Excel, PowerPoint, and OneNote, along with SharePoint and OneDrive.
• There are native audio and video calling options for one-on-one meetings, small groups, and the full team.

With over 140 business apps working with Teams, Microsoft will offer two additional features in the free version of the Teams app later this year: background blurring and inline message translation in 36 languages. Background blurring will intelligently blur out your screen’s background, so if you are conducting a video call from your kitchen table, there is no need to worry about the dirty dishes and the mess in the background, as it won’t be visible. Inline message translation allows people to chat in their native language and then translate their messages into English.

Some key features, however, are available only in the paid version of the Teams app. With the paid version, all video chats conducted within Teams can be stored in the cloud; these are searchable and come with automatic captioning. The paid version also offers email hosting through Exchange or Outlook. Microsoft is also planning to include facial recognition so viewers can easily search what was said and who said it. In addition, a public preview of a “live events” feature, i.e. video broadcasts that can be transcribed, archived, and time-coded, will be available shortly.

Microsoft doesn’t consider the free version of the Teams app a lightweight version of the paid offering, as is the case with Slack. As mentioned earlier, Microsoft includes many features offering increased storage space and message search, along with letting users make group video and voice calls, whereas Slack’s free version only allows calls between two people at a time. But Slack is also working on improving its own in-app search with automated suggestions. The main question is whether users who are already happy using Slack will want to switch to the Teams app or not. We can only wait to find out.

Microsoft launches Surface Go tablet at just $399
Microsoft Azure IoT Edge is open source and generally available!
Microsoft introduces ‘Immutable Blob Storage’, a highly protected object storage for Azure