
Tech News

3711 Articles
Raspberry Pi opens its first offline store in England

Prasad Ramesh
08 Feb 2019
2 min read
Raspberry Pi has opened a brick-and-mortar retail store in Cambridge, England. The mini computer maker has always sold its products online, shipping to many countries, so an offline store is a first for the company. Located in the Grand Arcade shopping centre, the Raspberry Pi store opened yesterday.

It is not just a boring store with Raspberry Pi boards. The collection includes boards, full demo setups with monitors, keyboards, and mice, as well as books, mugs, and even soft toys with Raspberry Pi branding. You can see some pictures of the new store here: https://twitter.com/Raspberry_Pi/status/1093454153534398464

A user shared his observation of the store on Hacker News: “I had a minute to check it out over lunch - most of the floorspace is dedicated to demonstrating what the raspberry pi can do at a high level. They had stations for coding, gaming, sensors, etc. but only ~1/4th of the space was devoted to inventory. They have a decent selection of Pis, sensor kits, and accessories. Not everyone working there was technical. This is definitely aimed at the general public.”

Raspberry Pi has a strong online community whose members come up with all sorts of DIY projects, but that community is limited to people who already have a keen interest in the device. More stores like this will help familiarize more people with Raspberry Pi. With branded books, demos, and toys, the store aims to popularize the mini computer.

Introducing Strato Pi: An industrial Raspberry Pi
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25
Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Aurora, a self-driving startup, secures $530 million in funding from Amazon, Sequoia, and T. Rowe Price among others

Natasha Mathur
08 Feb 2019
3 min read
Aurora, a self-driving startup, announced yesterday that it has raised over $530 million in a Series B financing round led by Sequoia, an American venture capital firm. Apart from Sequoia, the round includes large investments from Amazon and T. Rowe Price Associates, pushing Aurora’s valuation to more than $2.5 billion. “Amazon’s unique expertise, capabilities, and perspectives will be valuable for us as we drive towards our mission. We are also looking forward to having T. Rowe Price with us on this journey as a long-term capital partner”, states the Aurora team in a Medium blog post.

Amazon signed on last year as a partner for Toyota’s new mobility alliance, the e-Palette concept, which aims to develop fully autonomous electric vehicles. Aurora may be Amazon’s first official investment in self-driving tech, though Amazon has not yet confirmed this. “We are always looking to invest in innovative, customer-obsessed companies, and Aurora is just that. Autonomous technology has the potential to help make the jobs of our employees and partners safer and more productive … we’re excited about the possibilities,” Amazon told Forbes.

Other partners joining the funding round include Lightspeed Venture Partners, Geodesic, Shell Ventures, and Reinvent Capital, along with previous investors Greylock and Index Ventures, who invested $90 million in Aurora last year. In total, Aurora has now secured $620 million in funding over two rounds. Additionally, Carl Eschenbach, a partner at Sequoia, will now join Aurora’s existing board of directors, namely Mike Volpi, Reid Hoffman, and Ian Smith. Eschenbach’s experience in operations, partnerships, and scaling companies should add immense value as Aurora grows.

https://twitter.com/aurora_inno/status/1093557587025391622

Aurora was founded in 2017 by Chris Urmson, who previously led Google’s pioneering self-driving car program and was later joined by two other leaders in the autonomous driving industry: Drew Bagnell, formerly of Uber, and Sterling Anderson of Tesla. The firm has achieved a great deal in the two years it has been in the market, and now has offices in Palo Alto, San Francisco, and Pittsburgh in the US. Two of the world’s largest automakers, Volkswagen and Hyundai, have partnered with Aurora on its self-driving software, as has the Chinese electric vehicle startup Byton.

According to Mike Volpi, Aurora partner and board member, the company is “poised to win the self-driving market”. In a blog published yesterday, Volpi argues that Aurora has a competitive advantage because of its position as an independent company and because it is a full-stack provider to the autonomous vehicle market.

https://twitter.com/mavolpi/status/1093583791447171072

“With this newest investment, we will accelerate the development of the Aurora Driver and strengthen our team and ecosystem. The investment and strategic partners we bring on board today will help us build an enduring company”, writes the Aurora team.

Anthony Levandowski announces Pronto AI and makes a coast-to-coast self-driving trip
This self-driving car can drive in its imagination using deep reinforcement learning
Introducing AWS DeepRacer, a self-driving race car, and Amazon’s autonomous racing league to help developers learn reinforcement learning in a fun way
Signal introduces optional link previews to help users understand what's behind a URL

Melisha Dsouza
08 Feb 2019
2 min read
Signal, the encrypted communication app for iOS and Android, recently announced optional link previews for four popular sites: Imgur, Reddit, Instagram, and YouTube. This lets Signal users see what’s behind a particular URL when sharing content with their friends. According to Signal developer Joshua Lund, the feature was built so that users can generate link previews while hiding the URL from the Signal service itself, shielding their IP address from the previewed site, and obfuscating the true size of the preview image.

Link previews expose relevant pieces of the URL to the recipient. On some sites, like YouTube, the URL is a random string of letters, numbers, and symbols, so a recipient never knows where a link goes until they click on it. With link previews in place, users get an idea of what to expect just by looking at the preview. Users can disable the feature in settings, or by tapping the 'X' in the corner of the preview before hitting send.

Sending a link preview follows three simple steps:

1. The Signal app establishes a TCP connection through a privacy-enhancing proxy that obscures the user's IP address from the site being previewed.
2. A TLS session is negotiated directly between the app and the previewed site through the proxy, ensuring that the Signal service never has access to the URL.
3. The Signal app uses overlapping range requests to retrieve preview images, so the proxy service only sees repeated requests for a fixed block size when media is transferred.

Link previews may also help users avoid clicking on links that contain malicious content.

Users have taken the news well, commending the team on the new feature:
https://twitter.com/Roderik_de_Pree/status/1093329997882949632
https://twitter.com/bcomenl/status/1093270187208523776

You can head over to Signal’s official blog to know more about this news.

Signal to roll out a new privacy feature in beta, that conceals sender’s identity!
Messaging app Telegram’s updated Privacy Policy is an open challenge
SafeMessage: An AI-based biometric authentication solution for messaging platforms
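The preview itself is rendered from metadata in the fetched page. As a rough illustration of that final rendering step only (this is not Signal's client code, which also routes the fetch through the proxy and TLS setup described above), a minimal Open Graph extractor in Python might look like this:

```python
from html.parser import HTMLParser

class PreviewParser(HTMLParser):
    """Collects the Open Graph title and image from a page's <meta> tags."""
    def __init__(self):
        super().__init__()
        self.preview = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop in ("og:title", "og:image"):
            # Store "og:title" under the key "title", etc.
            self.preview[prop.split(":", 1)[1]] = attrs.get("content", "")

def extract_preview(html: str) -> dict:
    parser = PreviewParser()
    parser.feed(html)
    return parser.preview

page = ('<head><meta property="og:title" content="Cat video">'
        '<meta property="og:image" content="https://example.com/cat.jpg"></head>')
print(extract_preview(page))
```

The function and class names here are invented for the sketch; the point is simply that a preview is a small, user-visible summary derived from the page's own markup rather than from anything Signal's servers see.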
Google’s Adiantum, a new encryption standard for lower-end phones and other smart devices

Melisha Dsouza
08 Feb 2019
3 min read
Google has launched a new form of encryption called ‘Adiantum’, designed to secure data stored on lower-end smartphones and devices with insufficient processing power. For security, most Android phones have storage encryption enabled by default. An exemption is made for phones with low processing power or low-end hardware, where storage encryption is either off by default to improve performance, or not present at all.

Adiantum is aimed at devices that lack dedicated ARM extensions for cryptography. While a majority of new Android devices have hardware support for AES through the ARMv8 Cryptography Extensions, devices using low-end processors such as the ARM Cortex-A7 lack AES hardware, so software AES encryption leads to a poor, slow user experience. According to Eugene Liderman, director of mobile security strategy on Google’s Android security & privacy team, “Adiantum was built to run on phones and other smart devices that don’t have the specialized hardware to use current methods to encrypt locally stored data efficiently.” Hoping to democratize encryption for all devices, including any low-power Linux-based device from smartwatches to connected medical devices, Liderman says, “There will be no excuse for compromising security for the sake of device performance. Everyone should have privacy and security, regardless of their phone’s price tag.”

How does Adiantum work?

Adiantum is designed to encrypt local data without slowing systems down or increasing device prices through additional hardware. It uses the ChaCha stream cipher in a length-preserving mode, adapting ideas from AES-based proposals for length-preserving encryption such as HCTR and HCH. On ARM Cortex-A7, Adiantum encryption and decryption of 4096-byte sectors is around 5x faster than AES-256-XTS.

With Adiantum, changing any bit anywhere in the plaintext unrecognizably changes all of the ciphertext, and vice versa. It hashes almost the entire plaintext using a keyed hash based on Poly1305 and another keyed hashing function called NH. It also hashes a value called the “tweak”, which ensures that different sectors are encrypted differently. This hash is used to generate a nonce for the ChaCha encryption; after the encryption is complete, the data is hashed again. The whole scheme is arranged in a configuration known as a Feistel network.

You can read the whitepaper by Google software engineers Paul Crowley and Eric Biggers, which goes into further technical detail on Adiantum.

This is the second announcement Google has made in the spirit of Safer Internet Day. Earlier this week, Google released a new Chrome extension called “Password Checkup”, which checks whether a user’s credentials have appeared in past data leaks. You can head over to Google’s official blog to know more about Adiantum.

Google expands its Blockchain search tools, adds six new cryptocurrencies in BigQuery Public Datasets
Google launches Live Transcribe, a free Android app to make conversations more accessible for the deaf
Grafana 6.0 beta is here with new panel editor UX, google stackdriver datasource, and Grafana Loki among others
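To make that hash-encrypt-hash Feistel arrangement concrete, here is a toy length-preserving scheme with the same shape, using only Python's standard library. This is emphatically not Adiantum: the real construction uses NH, Poly1305, and ChaCha and comes with a security proof, whereas here HMAC-SHA256 and a hash-based keystream stand in for the keyed hash and stream cipher purely to show the structure. Do not use it to protect anything.

```python
import hashlib
import hmac

BLOCK = 16  # width of the right-hand branch, like Adiantum's 16-byte block

def _hash(key: bytes, tweak: bytes, data: bytes) -> bytes:
    # Stand-in for Adiantum's NH/Poly1305 keyed hash.
    return hmac.new(key, tweak + data, hashlib.sha256).digest()[:BLOCK]

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Stand-in for the ChaCha stream cipher: a counter-mode hash keystream.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, tweak: bytes, pt: bytes) -> bytes:
    left, right = pt[:-BLOCK], pt[-BLOCK:]
    right = _xor(right, _hash(key, tweak, left))           # hash step
    left = _xor(left, _keystream(key, right, len(left)))   # encrypt, nonce from hash
    right = _xor(right, _hash(key, tweak, left))           # hash again
    return left + right

def decrypt(key: bytes, tweak: bytes, ct: bytes) -> bytes:
    # The same three steps undo themselves in reverse: a Feistel network.
    left, right = ct[:-BLOCK], ct[-BLOCK:]
    right = _xor(right, _hash(key, tweak, left))
    left = _xor(left, _keystream(key, right, len(left)))
    right = _xor(right, _hash(key, tweak, left))
    return left + right
```

Note that the ciphertext is exactly as long as the plaintext, which is the property disk encryption needs: each 4096-byte sector must encrypt to exactly 4096 bytes, with the sector number playing the role of the tweak.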
Microsoft joins the OpenChain Project to help define standards for open source software compliance

Bhagyashree R
08 Feb 2019
2 min read
This Wednesday, OpenChain announced that Microsoft has joined the project as a platinum member and a board member to help drive open source compliance. Microsoft joins a list of large companies already in the OpenChain project, including Uber, Google, and Facebook. Announcing the collaboration, OpenChain General Manager Shane Coughlan said in a blog post, “We’re thrilled that Microsoft has joined the project and welcome their expertise. Microsoft is a strong addition not only in terms of open source but also in standardization. Their membership provides great balance to our community of enterprise, cloud, automotive and silicon companies, allowing us to ensure the standard is suitable for any size company across any industry.”

Why did Microsoft join OpenChain?

While building new products and services, companies make use of existing open source software provided by their supply chains. On large-scale projects, it is difficult to ensure that license requirements are met in a timely and effective manner. To make open source compliance simpler and more consistent for companies, the OpenChain Project develops standards and training materials. As part of the OpenChain community, Microsoft will now work more closely with it to create future standards that bring even greater trust to the open source ecosystem; it has already helped the community develop the upcoming version of the OpenChain Specification. David Rudin, Assistant General Counsel at Microsoft, said, “Trust is key to open source, and compliance with open source licenses is an important part of building that trust. By joining the OpenChain Project, we look forward to working alongside the community to define compliance standards that help build confidence in the open source ecosystem and supply chain.”

Beyond OpenChain, Microsoft is also working with the Linux Foundation’s TODO Group, an open group of companies that collaborate on practices, tools, and other ways of running open source programs. It also joined the Open Invention Network (OIN) in October last year, making its entire patent portfolio available to OIN members. Read the full announcement on OpenChain’s website.

Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records
Microsoft announces Internet Explorer 10 will reach end-of-life by January 2020
Microsoft Office 365 now available on the Mac App Store
The January 2019 release of Visual Studio Code v1.31 is out

Prasad Ramesh
08 Feb 2019
2 min read
The January 2019 release of Visual Studio Code, v1.31, is now available. This update brings Tree UI improvements, updates to the main menu, no reload on extension installation, and other changes.

Features of Visual Studio Code v1.31

No more reloads on installing extensions
This was one of the most requested features in the VS Code community. You no longer have to reload VS Code when you install a new extension, and a reload is not needed even when you uninstall an extension that has not been activated.

Improvements to the Tree UI
There is a new tree widget based on the existing list widget. It has been adopted in the File Explorer, all debug trees, search, and peek references, and brings features like:
Better keyboard navigation for faster access
Hierarchical select all in a tree, starting from the inner node the cursor is on
Customizable indentation for trees
Recursive expand/collapse of all tree nodes
Horizontal scrolling

Improvements to menus
There are more navigation actions in the Go menu so that they are easier to discover, and the Cut command is now available in the Explorer context menu.

Changes in the editor
Text selection is smarter, search history is shown below the search bar in the References view, and long descriptions can be written using string arrays.

Semantic selection
Semantic selection is now available in HTML, CSS/LESS/SCSS, and JSON.

Reflow support in the integrated terminal
The terminal now automatically wraps and unwraps text whenever it is resized.

New input variable
Input variables were introduced in the previous milestone. Visual Studio Code 1.31 adds a new input variable type, command, which runs an arbitrary command when the input variable is interpolated.

Updated extension API documentation
The VS Code API documentation was rewritten and moved to its own table of contents.

For more details on the improvements in Visual Studio Code 1.31 (January 2019), visit the release notes.

Code completion suggestions via IntelliCode comes to C++ in Visual Studio 2019
Microsoft Connect(); 2018 Azure updates: Azure Pipelines extension for Visual Studio Code, GitHub releases and much more!
Neuron: An all-inclusive data science extension for Visual Studio
German regulators put a halt to Facebook’s data gathering activities and ad business

Sugandha Lahoti
08 Feb 2019
3 min read
On Thursday, German regulators ordered Facebook to stop its data collection practices in Germany after finding that Facebook was exploiting consumers by requiring them to agree to data collection. The ruling came from the German competition authority, the Bundeskartellamt. Andreas Mundt, president of the Bundeskartellamt, said in a press release on Thursday, “Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.”

Facebook’s advertising model tracks its users across the Facebook app, WhatsApp, and Instagram, collecting data on the sites and apps they visit, what they like, and where they shop. This data allows the company to serve ads that are more relevant to users’ interests. Privacy advocates, however, have maintained that Facebook does this without users’ consent and without complete transparency.

According to a press release published Thursday, the German authority’s decision covers different data sources: Facebook-owned services like WhatsApp and Instagram can continue to collect data, but assigning that data to Facebook user accounts will only be possible with users’ voluntary consent. Where consent is not given, the data must remain with the respective service and cannot be processed in combination with Facebook data. Collecting data from third-party websites and assigning it to a Facebook user account will likewise require voluntary consent.

The Bundeskartellamt has given Facebook one month to appeal the decision to the Düsseldorf Higher Regional Court. In a blog post published on the Facebook newsroom, Facebook confirmed that it would appeal: “We disagree with their conclusions and intend to appeal so that people in Germany continue to benefit fully from all our services. The Bundeskartellamt underestimates the fierce competition we face in Germany, misinterprets our compliance with GDPR and undermines the mechanisms European law provides for ensuring consistent data protection standards across the EU.”

The ruling applies to all Facebook users based in Germany. If the decision is confirmed, Facebook will have four months to come up with a solution that meets the Bundeskartellamt’s orders.

“This is significant,” says Lina Khan, an antitrust expert affiliated with Columbia Law School. “The FCO’s theory is that Facebook’s dominance is what allows it to impose on users contractual terms that require them to allow Facebook to track them all over,” Khan says. “When there is a lack of competition, users accepting terms of service are often not truly consenting. The consent is a fiction.” Antitrust lawyer Thomas Vinje, a partner at Clifford Chance in Brussels, told Reuters that the Cartel Office ruling had potentially far-reaching implications: “This is a landmark decision, it’s limited to Germany but strikes me as exportable and might have a significant impact on Facebook's business model.”

Facebook faces multiple data-protection investigations in Ireland
Snopes will no longer do fact-checking work for Facebook, ends its partnership with the firm
Stanford experiment results on how deactivating Facebook affects social welfare measures
Google open sources ClusterFuzz, a scalable fuzzing tool

Natasha Mathur
08 Feb 2019
2 min read
Google open sourced its scalable fuzzing tool, ClusterFuzz, yesterday. Google uses ClusterFuzz to fuzz the Chrome browser; fuzzing is a technique that detects bugs in software by feeding unexpected inputs to a target program. To be effective, fuzzing should be continuous, done at scale, and integrated into a software project's development process.

ClusterFuzz can run on clusters of over 25,000 machines and effectively surfaces security and stability issues in software. It serves as the fuzzing backend for OSS-Fuzz, a service Google released back in 2016. ClusterFuzz was previously offered as a free service to open source projects through OSS-Fuzz, but is now available for anyone to use.

ClusterFuzz comes with a variety of features that help integrate fuzzing into a software project's development process. Some of the key features:
Accurate deduplication of crashes
Fully automatic bug filing and closing in issue trackers
Statistics for analyzing fuzzer performance and crash rates
An easy-to-use web interface for managing and viewing crashes

ClusterFuzz has so far found more than 16,000 bugs in Chrome and over 11,000 bugs in more than 160 open source projects integrated with OSS-Fuzz. It can detect bugs hours after they are introduced and verify the fix within a day. “We developed ClusterFuzz over eight years to fit seamlessly into developer workflows, and to make it dead simple to find bugs and get them fixed. Through open sourcing ClusterFuzz, we hope to encourage all software developers to integrate fuzzing into their workflows,” states the ClusterFuzz team. For more information, check out the ClusterFuzz GitHub repository.
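The core loop of any such fuzzer, generating an input, running the target, and bucketing the crashes, is simple even though operating it at ClusterFuzz's scale is not. A toy sketch in Python (the target function is a made-up buggy parser, and hashing the stack trace is a drastic simplification of ClusterFuzz's actual crash deduplication):

```python
import hashlib
import random
import traceback

def target(data: bytes):
    """Hypothetical buggy parser used as the fuzz target."""
    if 0xFF in data:
        raise ValueError("mishandled 0xFF byte")

def fuzz(iterations: int = 2000, seed: int = 0) -> dict:
    rng = random.Random(seed)
    crashes = {}  # dedup key -> first input that triggered that crash
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception:
            # Bucket crashes by a hash of the stack trace, so the same bug
            # hit thousands of times is reported only once.
            key = hashlib.sha256(traceback.format_exc().encode()).hexdigest()[:12]
            crashes.setdefault(key, data)
    return crashes

for key, data in fuzz().items():
    print(key, data.hex())
```

What the production system adds on top of this loop is the hard part: coverage-guided mutation, thousands of machines, automatic bug filing, and fix verification.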
Google expands its Blockchain search tools, adds six new cryptocurrencies in BigQuery Public Datasets
Transformer-XL: A Google architecture with 80% longer dependency than RNNs
Google News Initiative partners with Google AI to help ‘deep fake’ audio detection research
What to expect in Webpack 5?

Bhagyashree R
07 Feb 2019
3 min read
Yesterday, the team behind Webpack shared the updates coming in its next major version, Webpack 5. This version improves build performance with persistent caching, introduces a new named chunk id algorithm, and more. For Webpack 5, the minimum supported Node.js version has been raised from 6 to 8. As this is a major release, it comes with breaking changes, and users should expect that some plugins will not work.

Expected features in Webpack 5

Webpack 4 deprecated features removed
All features that were deprecated in Webpack 4 have been removed in this version, so when migrating to Webpack 5, make sure your Webpack build doesn’t emit any deprecation warnings. Additionally, IgnorePlugin and BannerPlugin must now be passed an options object.

Automatic Node.js polyfills removed
All versions before Webpack 4 provided polyfills for most of the Node.js core modules, applied automatically whenever a module used any of the core modules. Polyfills make it easy to use modules written for Node.js, but they also increase bundle size as large modules get added to the bundle. To stop this, Webpack 5 removes the automatic polyfilling and focuses on frontend-compatible modules.

Algorithm for deterministic chunk and module IDs
Webpack 5 comes with new algorithms for long-term caching, enabled by default in production mode with the following configuration lines: chunkIds: "deterministic", moduleIds: "deterministic". These algorithms assign short numeric IDs to modules and chunks in a deterministic way. It is recommended that you use the default values for chunkIds and moduleIds. You can also choose the old defaults, chunkIds: "size", moduleIds: "size", which generate smaller bundles but invalidate them more often for caching.

Named chunk IDs algorithm
A named chunk id algorithm is introduced, enabled by default in development mode. It gives chunks and filenames human-readable names instead of the old numeric names, deriving the chunk ID from the chunk’s content. Users therefore no longer need to use import(/* webpackChunkName: "name" */ "module") just for debugging. To opt out of this feature, set chunkIds: "natural".

Compiler idle and close
Starting from Webpack 5, compilers need to be closed after use. Compilers now enter and leave an idle state and have hooks for these states. Once a compiler is closed, all remaining work should be finished as fast as possible, and a callback then signals that closing has completed.

You can read the entire changelog in the Webpack repository.

Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!
How to create a desktop application with Electron [Tutorial]
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0
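The idea behind deterministic IDs can be sketched in a few lines: derive a short numeric ID from a stable property of each module, such as its path, so that unchanged modules keep their IDs between builds and long-term caches stay valid. The Python sketch below illustrates the idea only; it is not webpack's actual algorithm, and the function name and digit count are invented for the example:

```python
import hashlib

def deterministic_ids(module_paths, digits=4):
    """Assign each module path a short numeric ID derived from a hash of
    the path, so the same module always gets the same ID across builds."""
    ids = {}
    limit = 10 ** digits
    for path in sorted(module_paths):
        h = int(hashlib.md5(path.encode()).hexdigest(), 16)
        candidate = h % limit
        # Resolve collisions deterministically by linear probing.
        while candidate in ids.values():
            candidate = (candidate + 1) % limit
        ids[path] = candidate
    return ids

mods = ["./src/index.js", "./src/util.js", "./node_modules/lodash/lodash.js"]
print(deterministic_ids(mods))
```

The trade-off the changelog describes falls out of this shape: hash-derived IDs are stable (good for caching) but a few digits longer than the sequential "size"-ordered IDs, so bundles grow slightly in exchange for fewer cache invalidations.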
Seattle government arranges a public review of the city’s surveillance tech systems

Savia Lobo
07 Feb 2019
3 min read
Yesterday, the Seattle government announced that it is arranging a public review of the different surveillance technologies used across Seattle's city departments. The City of Seattle Surveillance Ordinance, passed by the city council on September 1, 2017, is designed to provide extended transparency to the council and the public whenever new surveillance technology is acquired. It compels city departments to periodically publish surveillance technology impact reports and allows the public to comment.

This round is a Group 2 surveillance review covering technologies including meter-reading devices, 911 call-logging systems, and the Seattle police online crime reporting tool. A previous public comment period, for Group 1 surveillance technologies, ran from October 8 to November 5, 2018, and covered a different set of technologies. The public comment period for this group runs until March 5, 2019, and the city will also host a surveillance technology fair on February 27 at city hall.

Technologies included in the Group 2 surveillance review:

Seattle Fire Department's (SFD) computer-aided dispatch system
https://youtu.be/AzKPaIHtbMs
This covers the information that 911 dispatchers gather for SFD calls. The system stores information like names and addresses, but SFD says that personal information is only available to select department personnel.

Acyclica
https://youtu.be/PhwBUe1iUhE
This is a service the Seattle Department of Transportation (SDOT) uses to collect traffic data. SDOT describes it as follows: "Acyclica collects unique phone identifiers, called a MAC address, using a sensor installed inside of traffic control cabinets and immediately encrypts the data. Acyclica then hashes and salts the data, anonymizing it by assigning a set of numbers and letters, then adding [a] random set of additional characters."

Electricity theft detection
https://youtu.be/WSfrhYv6ngY
Seattle City Light uses a variety of technologies to check whether people are stealing electricity. These range from low-tech items like binoculars up to an "Ampstick," which measures voltage along power lines.

Seattle Police 911 system: the 911 recorder
https://youtu.be/KFShZY9t5Mg
Similar to the SFD system, dispatchers collect personal data in order to send police to emergencies. SPD also has a CAD dispatch system up for review.

CopLogic
https://youtu.be/A7JEwJGKvrc
This is SPD's online crime reporting system, where citizens enter personal information if they have been the victim of a crime.

According to a user comment on Hacker News, “Seattle uses WiFi MAC addresses to track traffic movements. While the data is currently hashed and anonymized, it wouldn't surprise me if this data is eventually processed and combined with CV technology (specifically license plate readers and facial recognition tech) to provide detailed information on the movements of individuals.”

To know more about this announcement, visit the official Seattle.gov website.

Rights groups pressure Google, Amazon, and Microsoft to stop selling facial surveillance tech to government
The DEA and ICE reportedly plan to turn streetlights to covert surveillance cameras, says Quartz report
Conversational AI in 2018: An arms race of new products, acquisitions, and more
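SDOT's description of Acyclica's pipeline, hashing each MAC address and adding a salt so the digest cannot be reversed simply by hashing every possible address, maps onto a few lines of standard-library code. This is an illustrative sketch only, not Acyclica's actual scheme, and the function and parameter names are invented:

```python
import hashlib
import secrets

def anonymize_mac(mac: str, salt: bytes) -> str:
    """Return a salted hash of a MAC address: only the digest is stored,
    never the raw identifier. Normalizing case first keeps the digest
    stable regardless of how the sensor formats the address."""
    return hashlib.sha256(salt + mac.lower().encode()).hexdigest()

# In practice the salt would be rotated per collection period, so digests
# from different periods cannot be linked to each other.
salt = secrets.token_bytes(16)
print(anonymize_mac("AA:BB:CC:DD:EE:FF", salt))
```

The privacy property the Hacker News commenter worries about is visible here too: within a single salt period, the same device still produces the same digest everywhere it is seen, which is exactly what makes travel-time measurement work and what could enable tracking if the digests were ever combined with other identifying data.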
Spotify acquires Gimlet and Anchor to expand its podcast services

Prasad Ramesh
07 Feb 2019
2 min read
Spotify, the well established music streaming service now seeks to expand into podcasting. Yesterday, Spotify announced the acquisition of two podcast firms namely Gimlet and Anchor. Spotify also plans to spend close to $500m more on acquisitions this year. The actual numbers for the acquisition are not revealed but Gimlet was reportedly acquired for around $200m. Gimlet is an award winning narrative podcast company with podcasts on a variety of topics. Anchor allows anyone to record, publish, and monetize podcasts with just a smartphone. This acquisition puts Spotify in the league of Apple and Google music who have also been investing in the world of podcasts. A podcast is an audio episode where people talk about something. The topics can range from education and politics to just fun talk shows. Daniel Ek, Spotify co-founder, and CEO said: “These acquisitions will meaningfully accelerate our path to becoming the world’s leading audio platform, give users around the world access to the best podcast content, and improve the quality of our listening experience as well as enhance the Spotify brand. We are proud to welcome Gimlet and Anchor to the Spotify team, and we look forward to what we will accomplish together.” The CEO explains in a blog post: “Users love having podcasts as a part of their Spotify experience. Our podcast users spend almost twice the time on the platform, and spend even more time listening to music.” Spotify is known for its great music recommendations and intuitive user interface. It also has an expanding library with deals adding more music from labels like T-series. However, its losses have increased despite the increase in the number of subscribers. With these podcast acquisitions with more to come, Spotify seems poised to add more revenue and grow its business beyond music into a complete audio streaming service platform. 
Spotify releases Chartify, a new data visualization library in python for easier chart creation
cstar: Spotify’s Cassandra orchestration tool is now open source!
Spotify has “one of the most intricate uses of JavaScript in the world,” says former engineer
Instacart changes its “tips stealing” policy after facing workers backlash

Natasha Mathur
07 Feb 2019
4 min read
Instacart, a popular online grocery delivery service in the US, was hit with a lawsuit last week, filed by Los Angeles-based worker Sarah Lozano and other Instacart shoppers (workers who handpick items for customers and deliver them). The lawsuit concerns Instacart’s mishandling of tips and its treatment of delivery workers.

Instacart shifted to a new payment model last November, publicly announcing that the change was intended to “provide clearer and more consistent earnings, and enhance the shopper experience”. The change was not taken well by the workers, who complained of pay cuts and “tips stealing” under the new model. Under the new model, Instacart guarantees its workers a minimum per-job payment of $10; when a worker’s total pay falls below $10, Instacart has to pay extra to bring the overall pay up to the $10 minimum, reports NBC News. However, according to the workers’ complaints, the tips paid by customers were being used by Instacart to supplement the batch payments to reach the $10 minimum. Only after the $10 minimum is reached does the tip actually boost the worker’s take-home pay.

The lawsuit, filed in California’s Superior Court, states that Instacart “intentionally and maliciously misappropriated gratuities in order to pay plaintiff’s wages even though Instacart maintained that 100 percent of customer tips went directly to shoppers. Based on this representation, Instacart knew customers would believe their tips were being given to shoppers in addition to wages, not to supplement wages entirely”.

This is not the first time Instacart has faced payment-related issues. In November 2017, Instacart paid $4.6 million to settle a class action lawsuit with its workers. That lawsuit was filed by independent contractors working for the firm who claimed 18 violations against it, including unfair tip pooling.
A workers’ organization called Working Washington shared screenshots on its blog that it received from Instacart shoppers across the country, stating that the screenshots are proof of how “messed up” the company’s pay model is.

The first screenshot (left) shows Instacart paying just $1.23 for a job that took 62 minutes, using the customer’s tip to hit the minimum $10. If the customer had tipped $0, Instacart would have had to pay more; the tip reduces the company’s cost without adding anything to the worker’s income. Similarly, the second screenshot (right) shows a job that took 127 minutes to complete, where Instacart paid just $6.72 because the customer tipped $25. Many workers also took to Reddit and other online forums to raise their voices against Instacart’s paying practices.

Apoorva Mehta, founder and CEO of Instacart, published a post on Medium today, where he states that although the changed pay model was designed to improve transparency, the company fell short on delivering its promises to the workers. Instacart is now reversing its pay model and has launched new measures to “more fairly and competitively compensate” its shoppers. Under the new pay model, tips will always be separate from Instacart’s contribution to shopper compensation, and Instacart will “retroactively compensate” shoppers in cases where tips were included in the minimum, among other changes. “While our intention was to increase the guaranteed payment for small orders, we understand that the inclusion of tips was misguided”, states Mehta.

Instacart is not alone: DoorDash, another popular online grocery delivery service in the US, has come under fire for similar reasons.
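The disputed pay math behind those screenshots can be sketched in a few lines of code. This is a hedged reconstruction: the top-up formula below is our own assumption, inferred from the numbers reported in the screenshots, not Instacart's published algorithm.

```javascript
// Toy model of the disputed pay scheme: the customer's tip counts
// toward the $10 minimum, so Instacart only tops up whatever the
// tip doesn't already cover. Formula inferred from the screenshot
// numbers; not Instacart's actual published algorithm.
function instacartContribution(batchPay, tip, minimum = 10) {
  // Instacart pays the larger of the base batch pay and the
  // shortfall left after the tip is applied to the minimum.
  return Math.max(batchPay, minimum - tip);
}

// Second screenshot: batch pay $6.72, tip $25 -> Instacart pays only $6.72
console.log(instacartContribution(6.72, 25)); // 6.72

// Same job with a $0 tip: Instacart must pay the full $10 minimum
console.log(instacartContribution(6.72, 0)); // 10
```

Under this model, the larger the tip, the smaller Instacart's own outlay, which is exactly the behavior the shoppers objected to.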
Although DoorDash’s FAQ page states, “Dashers always receive 100% of tips...In addition to 100% of the tip, Dashers will always receive at least $1 from DoorDash”, DoorDash hasn’t explicitly reacted to the recent uproar.

Tech Workers Coalition (TWC), a non-profit coalition of tech workers, also spoke out in support of the Instacart workers on Twitter, while criticizing DoorDash for remaining silent:
https://twitter.com/techworkersco/status/1093272671398244355
https://twitter.com/techworkersco/status/1093227213711798272

Public reaction to the news has been largely negative towards Instacart, with people supporting Instacart shoppers for raising their voices against the firm’s unfair pay model:
https://twitter.com/dhh/status/1093244878627225602
https://twitter.com/NicoleGarton_/status/1093246126080151553
https://twitter.com/Ithen_thought/status/1092521656143265798
https://twitter.com/BabiesFree/status/1092526715539152897

Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
Amazon faces increasing public pressure as HQ2 plans go under the scanner in New York
Google expands its Blockchain search tools, adds six new cryptocurrencies in BigQuery Public Datasets

Sugandha Lahoti
07 Feb 2019
2 min read
Google’s BigQuery Public Datasets program has added six new cryptocurrency datasets, expanding its blockchain search tools. Including Bitcoin and Ethereum, which were added last year, the total count is now eight. The six new cryptocurrency blockchain datasets are Bitcoin Cash, Dash, Dogecoin, Ethereum Classic, Litecoin, and Zcash.

BigQuery Public Datasets are stored in BigQuery and made available to the general public through the Google Cloud Public Dataset Program. The blockchain-related datasets consist of each blockchain’s transaction history to help developers better understand cryptocurrency.

Apart from adding new datasets, Google has released a set of queries and views that map all blockchain datasets to a double-entry book data structure that enables multi-chain meta-analyses, as well as integration with conventional financial record processing systems. A Blockchain ETL ingestion framework updates all datasets every 24 hours via a common codebase. This results in lower latency for loading blocks into BigQuery and makes it possible to ingest additional BigQuery datasets with less effort. It also means that a low-latency loading solution can be implemented once and then used to enable real-time streaming transactions for all blockchains.

With this release, the blockchain datasets have been standardized into a “unified schema,” meaning the data is structured in a uniform, easy-to-access way. Google has also included more data, such as script op-codes; having these scripts available for Bitcoin-like datasets enables more advanced analyses. Views that abstract the blockchain ledger as a double-entry accounting ledger help it further interoperate with Ethereum and ERC-20 token transactions.

Allen Day, Cloud Developer Advocate, Google Cloud Health AI, writes in a blog post, “We hope these new public datasets encourage you to try out BigQuery and BigQuery ML for yourself. Or, if you run your own enterprise-focused blockchain, these datasets and sample queries can guide you as you form your own blockchain analytics.”

Blockchain governance and uses beyond finance – Carnegie Mellon university podcast
Stable version of OpenZeppelin 2.0, a framework for smart blockchain contracts, released!
Is Blockchain a failing trend or can it build a better world? Harish Garg provides his insight.
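To make the double-entry idea concrete, here is a small toy illustration (the field names are our own, not the actual BigQuery schema): each transaction is flattened into negative (debit) entries for its inputs and positive (credit) entries for its outputs, so per-address balances reduce to a simple sum.

```javascript
// Toy double-entry view of a blockchain transaction: inputs become
// debits (negative values), outputs become credits (positive values).
// Field names are illustrative, not the real BigQuery schema.
function toDoubleEntry(tx) {
  const entries = [];
  for (const input of tx.inputs) {
    entries.push({ address: input.address, value: -input.value });
  }
  for (const output of tx.outputs) {
    entries.push({ address: output.address, value: output.value });
  }
  return entries;
}

// With every transaction in double-entry form, computing balances
// is just a sum per address (a GROUP BY in SQL terms).
function balances(transactions) {
  const totals = {};
  for (const tx of transactions) {
    for (const e of toDoubleEntry(tx)) {
      totals[e.address] = (totals[e.address] || 0) + e.value;
    }
  }
  return totals;
}

// Example: alice spends a 5-coin input, sending 3 coins to bob
// with 2 coins returned to herself as change (fees ignored).
const txs = [{
  inputs: [{ address: 'alice', value: 5 }],
  outputs: [{ address: 'bob', value: 3 }, { address: 'alice', value: 2 }],
}];
console.log(balances(txs)); // { alice: -3, bob: 3 }
```

The appeal of this representation, as the article notes, is that the same query shape works across chains and interoperates with conventional financial record processing.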
React 16.8 releases with the stable implementation of Hooks

Bhagyashree R
07 Feb 2019
2 min read
Yesterday, Dan Abramov, one of the React developers, announced the release of React 16.8, which comes with the feature everyone was waiting for: Hooks. The feature first landed in React 16.7-alpha last year and is now available in this stable release. The stable implementation of React Hooks is available for React DOM, React DOM Server, React Test Renderer, and React Shallow Renderer. Hooks are also supported by React DevTools and the latest versions of Flow and TypeScript. Developers are recommended to enable a new ESLint plugin called eslint-plugin-react-hooks that enforces best practices with Hooks; it will also be included in the Create React App tool by default.

What are Hooks?

At React Conf 2018, Sophie Alpert and Dan Abramov explained the current limitations in React and how they can be solved using Hooks. React Hooks are functions that let you “hook into” React state and other lifecycle features from function components. Hooks come with various advantages, such as making it easy to reuse stateful logic between components, split complex components into smaller functions, and use React without classes.

What’s new in React 16.8?

Currently, Hooks do not cover all use cases for classes, but they soon will. Only two methods, getSnapshotBeforeUpdate() and componentDidCatch(), don’t yet have Hooks API counterparts. A new API named ReactTestUtils.act() is introduced in this stable release; it ensures that the behavior in your tests matches what happens in the browser more closely. Dan Abramov, in a post, recommended wrapping code that renders and triggers updates to components in act() calls.
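The mechanics behind Hooks can be illustrated with a toy model: React keeps a list of state "cells" per component and an index that resets before every render, which is why Hooks must be called in the same order on each render. The sketch below is purely illustrative and is not React's actual implementation.

```javascript
// Toy sketch of useState: state lives in an external array of cells,
// and a cursor (reset before each render) ties each useState call to
// its cell by call order. Not React's real implementation.
const cells = [];
let cursor = 0;

function useState(initialValue) {
  const i = cursor++; // each call claims the next cell in order
  if (cells[i] === undefined) cells[i] = initialValue;
  const setState = (value) => { cells[i] = value; };
  return [cells[i], setState];
}

// A "function component" using the hook:
function Counter() {
  const [count, setCount] = useState(0);
  return { count, increment: () => setCount(count + 1) };
}

// Simulate two renders, resetting the cursor as React would:
cursor = 0;
let ui = Counter();
ui.increment();        // state update between renders
cursor = 0;
ui = Counter();        // re-render picks up the new state
console.log(ui.count); // 1
```

Because cells are matched purely by call order, calling a Hook conditionally would shift the indices between renders, which is exactly the mistake the eslint-plugin-react-hooks rules guard against.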
Other changes include:
- The useReducer Hook lazy initialization API is improved
- Support for synchronous thenables is added to React.lazy()
- Components are rendered twice with Hooks in Strict Mode (DEV-only), similar to class behavior
- A warning is shown when returning different Hooks on subsequent renders
- The useImperativeMethods Hook is renamed to useImperativeHandle
- The Object.is algorithm is used for comparing useState and useReducer values

To use Hooks, you need to update all the React packages to 16.8 or higher. On a side note, React Native will support Hooks starting from the React Native 0.59 release. Read all the updates in React 16.8 on the official website.

React Conf 2018 highlights: Hooks, Concurrent React, and more
React introduces Hooks, a JavaScript function to allow using React without classes
React 16.x roadmap released with expected timeline for features like “Hooks”, “Suspense”, and “Concurrent Rendering”
Microsoft is planning to bring Xbox Live gaming to Android, iOS, Nintendo Switch, and more

Sugandha Lahoti
07 Feb 2019
2 min read
Microsoft is reportedly planning to bring Xbox Live cross-platform gaming features to PC, Xbox, iOS, Android, and the Nintendo Switch. The news was first reported by Windows Central via a GDC 2019 session listing on Xbox Live. “Xbox Live is expanding from 400 million gaming devices and a reach to over 68 million active players to over 2 billion devices with the release of our new cross-platform XDK,” says the GDC listing.

The GDC session will also offer a first look at the SDK that enables game developers to connect players between iOS, Android, and Switch, in addition to Xbox and any game in the Microsoft Store on Windows PCs. Until now, Microsoft has reserved Xbox Live support on iOS, Android, and the Nintendo Switch for its own games, but it now aims to bring Xbox Live integration to even more titles. This is part of Microsoft’s gaming mission to bring software, services, and games to players on platforms beyond its traditional PC and Xbox markets.

Per Windows Central, “Developers will be able to bake cross-platform Xbox Live achievements, social systems, and multiplayer, into games built for mobile devices and Nintendo Switch, as part of its division-wide effort to grow Xbox Live's user base.” For developers, this would mean allowing “communities to mingle more freely across platforms. Combined with PlayFab gaming services, this means less work for game developers and more time to focus on making games fun,” says the GDC listing.

Microsoft is also building xCloud, a game streaming service that will stream Xbox games to PCs, consoles, and mobile devices later this year.

Twitter users are fairly excited about this news:
https://twitter.com/Avers_G4GMedia/status/1091623967088144384
https://twitter.com/NintendoSwitchC/status/1092560268956233728
https://twitter.com/TannithArt/status/1092675726996844544

Microsoft announces Project xCloud, a new Xbox game streaming service.
Epic games CEO calls Google “irresponsible” for disclosing the security flaw in Fortnite Android
Microsoft plans to use Windows ML for Game development