
Tech News

3711 Articles

Google announces Stadia, a cloud-based game streaming service, at GDC 2019

Bhagyashree R
20 Mar 2019
3 min read
Yesterday, at the ongoing Game Developers Conference (GDC), Google marked its entry into the game industry with Stadia, its new cloud-based platform for streaming games. It will launch later this year in select regions, including the U.S., Canada, the U.K., and the rest of Europe.

https://twitter.com/GoogleStadia/status/1108097130147860480

GDC 2019 is a five-day event that commenced on March 18 in San Francisco, CA. It is the world’s largest game industry event, bringing together 28,000 attendees to share ideas and discuss the future of the gaming industry.

What is Stadia?

Phil Harrison, Google Vice President and GM, said while announcing the game streaming platform, “Our ambition is far beyond a single game. The power of instant access is magical, and it's already transformed the music and movie industries."

Stadia is a cloud-based game streaming platform that aims to bring together gamers, YouTube broadcasters, and game developers “to create a new experience”. Games are streamed from a data center to any device that can connect to the internet, such as a TV, laptop, desktop, tablet, or mobile phone. With this, gamers will be able to access their games anytime and on virtually any screen, and game developers will be able to use nearly unlimited resources for developing games. Since all the graphics processing happens on off-site hardware, there will be little stress on your local hardware.

The demo that Google shared at GDC currently streams video at 1080p and 60 frames per second. At launch, Stadia will support up to 4K resolution at 60 frames per second with approximately 25 Mbps of bandwidth. In the future, Google plans to offer 8K resolution and 120 frames per second.

Google, in partnership with AMD, is building a custom GPU for its data centers that will deliver 10.7 teraflops of power. Each Stadia instance will also be powered by a custom 2.7 GHz x86 processor with 16 GB of RAM.

Stadia Controller

At GDC, Google also talked about a dedicated controller for Stadia that connects directly to a game session in the cloud through WiFi. The controller provides a button for capturing, saving, and sharing gameplay in up to 4K resolution. It also comes integrated with Google Assistant and a built-in microphone. According to a blog post shared by Google, it is not guaranteed that the controller will be offered for sale, as the device is not yet authorized by the Federal Communications Commission.

While unveiling the game-streaming service, Google did not reveal any details on pricing. The details regarding when exactly the service will reach gamers and developers are also unknown. To know more about Stadia, check out the official announcement on Google’s blog.

Google is planning to bring Node.js support to Fuchsia
Google to be the founding member of CDF (Continuous Delivery Foundation)
Google announces the stable release of Android Jetpack Navigation


Microsoft introduces Remote Development extensions to make remote development easier on VS Code

Bhagyashree R
03 May 2019
3 min read
Yesterday, Microsoft announced the preview of the Remote Development extension pack for VS Code, which enables developers to use a container, remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment.

https://twitter.com/code/status/1124016109076799488

Currently, developers will need to use the Insiders build for remote development until the stable version is available. Insiders builds are the versions shipped daily with the latest features and bug fixes.

Why are these VS Code extensions needed?

Developers often choose containers or remote virtual machines configured with specific development and runtime stacks as their development environment. This is an optimal choice because configuring such development environments locally can be too difficult or sometimes even impossible. Data scientists also require remote environments to do their work efficiently. They build and train data models, and to do that they need to analyze large datasets. This demands massive storage and compute capacity, which a local machine can hardly provide.

One option to solve this problem is using Remote Desktop, but it can sometimes be laggy. Developers often use Vim and SSH, or local tools with file synchronization, but these can also be slow and error-prone. There are browser-based tools that can be used in some scenarios, but they lack the richness and familiarity that desktop tools provide.

The VS Code Remote Development extension pack

Looking at these challenges, the VS Code team came up with a solution: have VS Code run in two places at once. One instance runs the developer tools locally, and the other connects to a set of development services running remotely in the context of a physical or virtual machine. Following are the three extensions for working with remote workspaces:

Remote - WSL

Remote - WSL allows you to use WSL as a full development environment directly from VS Code. It runs commands and extensions directly in WSL, so developers don’t have to think about pathing issues, binary compatibility, or other cross-OS challenges. With this extension, developers can edit files located in WSL or the mounted Windows filesystem, and also run and debug Linux-based applications on Windows.

Remote - SSH

Remote - SSH allows you to open folders or workspaces hosted on any remote machine, VM, or container with a running SSH server. It runs commands and other extensions directly on the remote machine, so you don’t need to have the source code on your local machine. It enables you to use larger, faster, or more specialized hardware than your local machine. You can also quickly switch between different remote development environments and safely make updates.

Remote - Containers

Remote - Containers allows you to use a Docker container as your development container. It starts or attaches to a development container that runs a well-defined tool and runtime stack. All your workspace files are copied or cloned into the container, or mounted from the local file system. To configure the development container you can use a ‘devcontainer.json’ file (a minimal sketch follows at the end of this article).

To read more in detail, visit Microsoft’s official website.

Docker announces collaboration with Microsoft’s .NET at DockerCon 2019
Microsoft and GitHub employees come together to stand with the 996.ICU repository
Microsoft employees raise their voice against the company’s misogynist, sexist and racist acts
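For illustration, here is a minimal devcontainer.json sketch; the image name, extension ID, and command below are just example values, and the exact set of supported fields depends on the extension version:

```json
{
    "name": "Python 3 dev container",
    "image": "python:3",
    "extensions": ["ms-python.python"],
    "postCreateCommand": "pip install -r requirements.txt"
}
```

Placed in the workspace, a file like this tells the Remote - Containers extension which container image to start, which VS Code extensions to install inside it, and what to run after the container is created.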


Tesla Software Version 10.0 adds Smart Summon, in-car karaoke, Netflix, Hulu, and Spotify streaming

Sugandha Lahoti
27 Sep 2019
3 min read
Tesla rolled out a new software version for its cars, Tesla Software Version 10.0, with a host of features for Model S, Model X, and Model 3 owners. Software v10 has in-car karaoke and entertainment services like Netflix and Hulu, as well as Spotify Premium account access.

https://youtu.be/NfMtONBK8dY

Probably the most interesting feature is Smart Summon. If you are a customer who has purchased Full Self-Driving Capability or Enhanced Autopilot, you are eligible for the update. With this feature, you can summon your car or get it to navigate a parking lot, as long as the car is within your line of sight. This feature, Tesla says, is perfect “if you have an overflowing shopping cart, are dealing with a fussy child, or simply don’t want to walk to your car through the rain.”

Tesla’s updated file system now separates videos captured by the car’s camera when in Dashcam and Sentry Mode. They will be auto-deleted when there’s a need to free up storage.

Tesla Software Version 10.0 is jam-packed with entertainment options

With Tesla Theatre, you can stream Netflix, YouTube, and Hulu or Hulu + Live TV right from your car while parked. Chinese customers have iQiyi and Tencent Video access. Spotify Premium account access is also available in all supported markets, in addition to Slacker Radio and TuneIn. For customers in China, Tesla has the Ximalaya service for podcasts and audiobooks. Additionally, there is a karaoke system, “Car-aoke”, which includes a library of music and song lyrics that passengers and drivers can use while parked or driving.

Tesla also added new navigation features that suggest interesting restaurants and sightseeing opportunities within your car’s range. Maps are also improved, so search results are sorted based on the distance to each destination.

Tesla Arcade has a new Cuphead port. Cuphead is a run-and-gun video game developed and published by StudioMDHR. Using a USB controller, single-player and co-op modes are available to play in the Tesla Edition of Cuphead.

Tesla’s new software update has got the Twitterati thrilled.

https://twitter.com/mortchad/status/1177301454446460933
https://twitter.com/ChrisJCav/status/1177304907197534208
https://twitter.com/A13Frank/status/1177339094835191808

To receive this update as quickly as possible, Tesla says, make sure your car is connected to Wi-Fi. You’ll automatically receive Version 10.0 when it’s ready for your car based on your location and vehicle configuration — there is no need to request the update.

Tesla reports a $408 million loss in its Q2 earnings call; CTO and co-founder JB Straubel steps down
Tesla Autonomy Day takeaways: Full Self-Driving computer, Robotaxis launching next year, and more
Researchers successfully trick Tesla autopilot into driving into opposing traffic via “small stickers as interference patches on the ground”
Tesla is building its own AI hardware for self-driving cars


Update: Pandemic Driving More AI Business; Researchers Fighting Fraud ‘Cure’ Posts from AI Trends

Matthew Emerick
08 Oct 2020
6 min read
By AI Trends Staff

The coronavirus pandemic’s impact on AI has many shades, from driving higher rates of IT spending on AI, to spurring researchers to fight fraudulent “cure” claims on social media, to hackers seeking to tap the medical data stream.

IT leaders are planning to spend more on AI/ML, and the pandemic is increasing demand for people with related job skills, according to a survey of over 100 IT executives with AI initiatives going on at companies that were spending at least $1 million annually on AI/ML before the pandemic. The survey was conducted in August by Algorithmia, a provider of ML operations and management platforms. Some 50% of respondents reported they are planning to spend more on AI/ML in the coming year, according to an account based on the survey from TechRepublic.

A lack of in-house staff with AI/ML skills was the primary challenge for IT leaders before the pandemic, according to 59% of respondents. The most important job skills coming out of the pandemic are going to be security (69%), data management (64%), and systems integration (62%).

“When we come through the pandemic, the companies that will emerge the strongest will be those that invested in tools, people, and processes that enable them to scale delivery of AI and ML-based applications to production,” stated Diego Oppenheimer, CEO of Algorithmia, in a press release. “We believe investments in AI/ML operations now will pay off for companies sooner than later. Despite the fact that we’re still dealing with the pandemic, CIOs should be encouraged by the results of our survey.”

Researchers Tracking Increase in Fraudulent COVID-19 ‘Cure’ Posts

Legitimate businesses are finding opportunities from COVID-19, and so are the scammers. Researchers at UC San Diego are studying the increase of fraudulent posts around COVID-19 “cures” being posted on social media.

In a new study published in the Journal of Medical Internet Research Public Health and Surveillance on August 25, 2020, researchers at University of California San Diego School of Medicine found thousands of social media posts on two popular platforms — Twitter and Instagram — tied to financial scams and possible counterfeit goods specific to COVID-19 products and unapproved treatments, according to a release from UC San Diego via EurekAlert.

“We started this work with the opioid crisis and have been performing research like this for many years in order to detect illicit drug dealers,” stated Timothy Mackey, PhD, associate adjunct professor at UC San Diego School of Medicine and lead author of the study. “We are now using some of those same techniques in this study to identify fake COVID-19 products for sale. From March to May 2020, we have identified nearly 2,000 fraudulent postings likely tied to fake COVID-19 health products, financial scams, and other consumer risk.”

The first two waves of fraudulent posts focused on unproven marketing claims for prevention or cures and fake testing kits. The third wave, of fake pharmaceutical treatments, is now materializing. Prof. Mackey expects it to get worse when public health officials announce development of an effective vaccine or other therapeutic treatments.

The research team identified suspect posts through a combination of natural language processing and machine learning. Topic model clusters were transferred into a deep learning algorithm to detect fraudulent posts. The findings feed a customized data dashboard that enables public health intelligence and provides reports to authorities, including the World Health Organization and the U.S. Food & Drug Administration (FDA).

“Criminals seek to take advantage of those in need during times of a crisis,” Mackey stated.

Sandia Labs, BioBright Working on a Better Way to Secure Critical Health Data

Like the scammers, hackers are also seeing opportunity in these pandemic times. Hackers that threaten medical data are of particular concern. One effort to address this is a partnership between Sandia National Laboratories and the Boston firm BioBright to improve the security of synthetic biology data, a new commercial field.

“In the past decade, genomics and synthetic biology have grown from principally academic pursuits to a major industry,” said Corey Hudson, computational biology manager and senior member of the technical staff at Sandia Labs, in a press release. “This shift paves the way toward rapid production of small molecules on demand, precision healthcare, and advanced materials.”

BioBright is a scientific lab data automation company, recently acquired by Dotmatics, a UK company working on the Lab of the Future. The two companies are working to develop a better security model since, currently, large volumes of data about the health and pharmaceutical information of patients are being handled with security models developed two decades ago, Hudson suggested. The situation potentially leaves open the risk of data theft or targeted attack by hackers to interrupt production of vaccines and therapeutics or the manufacture of controlled, pathogenic, or toxic materials, he suggested.

“Modern synthetic biology and pharmaceutical workflows rely on digital tools, instruments, and software that were designed before security was such an important consideration,” stated Charles Fracchia, CEO of BioBright. The new effort seeks to better secure synthetic biology operations and genomic data across industry, government, and academia. The team is using Emulytics, a research initiative developed at Sandia for evaluating realistic threats against critical systems, to help develop countermeasures to the risks.

C3.ai Sponsors COVID-19 Grand Challenge Competition with $200,000 in Awards

If all else fails, participate in a programming challenge and try to win some money. Enterprise AI software provider C3.ai is inviting data scientists, developers, researchers, and creative thinkers to participate in the C3.ai COVID-19 Grand Challenge and win prizes totaling $200,000.

The judging panel will prioritize data science projects that help to understand and mitigate the spread of the virus, improve the response capabilities of the medical community, minimize the impact of this disease on society, and help policymakers navigate responses to COVID-19. C3.ai will award one grand prize of $100,000, two second-place awards of $25,000 each, and four third-place awards of $12,500 each.

“The C3.ai COVID-19 Grand Challenge represents an opportunity to inform decision makers at the local, state, and federal levels and transform the way the world confronts this pandemic,” stated Thomas M. Siebel, CEO of C3.ai, in a press release. “As with the C3.ai COVID-19 Data Lake and the C3.ai Digital Transformation Institute, this initiative will tap our community’s collective IQ to make important strides toward necessary, innovative solutions that will help solve a global crisis.”

The competition is now open. Registration ends Oct. 25 and final submissions are due Nov. 18, 2020. By Dec. 9, C3.ai will announce seven competition winners and award $200,000 in cash prizes to honorees. Judges include Michael Callagy, County Manager, County of San Mateo; S. Shankar Sastry, Professor of Electrical Engineering & Computer Science, UC Berkeley; and Zico Kolter, Associate Professor of Computer Science, Carnegie Mellon University.

Launched in April 2020, the C3.ai COVID-19 Data Lake now consists of 40 unique datasets, said to be among the largest unified, federated images of COVID-19 data in the world.

Read the source articles and information at TechRepublic, from UC San Diego via EurekAlert, a press release from Sandia Labs, and a press release from C3.ai about the COVID-19 Grand Challenge.


Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio

Natasha Mathur
19 Mar 2019
4 min read
Microsoft announced the March release of Azure Data Studio yesterday. This latest Azure Data Studio release introduces features such as preview support for PostgreSQL in Azure Data Studio, a corresponding preview PostgreSQL extension in Visual Studio Code (VS Code), and SQL Notebooks, among others.

What’s new in Azure Data Studio?

PostgreSQL extension for Azure Data Studio

There’s new preview support for PostgreSQL in Azure Data Studio. The preview support adds on-premises and Azure Database for PostgreSQL to the existing support for SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure SQL Data Warehouse, and SQL Server 2019 big data clusters. The Azure Data Studio extension for PostgreSQL comprises a Tools API service that offers data management and high-performance query execution capabilities.

Azure Data Studio also provides a modern, keyboard-focused PostgreSQL coding experience that simplifies everyday tasks. Users can now run on-demand SQL queries, and view and save results as text, JSON, or Excel. There’s also an extension marketplace for Azure Data Studio that helps developers build and contribute back to the open source ecosystem. The Microsoft team states that it is making the new PostgreSQL extension experience open source under the MIT license. This allows users to connect to all their PostgreSQL databases, including those running on Azure (Azure Database for PostgreSQL).

SQL Notebooks

Using SQL Notebooks, you can easily interleave written instructions, analysis, diagrams, and animated GIFs using markdown, and then add code cells with the SQL code to be executed. The SQL Notebook functionality is built into the base Azure Data Studio product and requires no additional extensions to connect to servers and execute SQL result sets. You can get started with SQL Notebooks just like a regular query editor. In case you’d like to use other languages such as Python, R, or Scala, you’ll be prompted to install additional dependencies.

PowerShell extension

The PowerShell extension from Visual Studio (VS) Code is now featured in the Azure Data Studio marketplace. The new PowerShell extension aligns with the other automation scenarios used by database administrators and developers. There’s an integrated terminal in Azure Data Studio that makes it easy for users to integrate PowerShell experiences with data.

SQL Server dacpac extension

The Microsoft team mentions that it has been trying to improve the Data-Tier Application Wizard in Azure Data Studio after receiving feedback from the community. Originally shipped with the SQL Server Import extension, this feature will now ship as a separate extension, because the team plans to bring more features making it easy to use dacpacs and bacpacs in Azure Data Studio. This extension will also be included in the Admin Pack for SQL Server, an extension pack that lets you quickly download popular features from SQL Server Management Studio.

Other Changes

Community extension highlight: Queryplan.show. This extension adds integration support to visualize query plans using the community extension Queryplan.show.

Visual Studio Code refresh from 1.26.1 to 1.30.2. There have been a few refresh updates from the July release (1.26.1) to the November release (1.30.2) of VS Code. Highlights are as follows:
New Settings editor UI, making it easy to modify Azure Data Studio settings.
Multiline search improvements.
Better macOS support.

SQL Server 2019 Preview extension. The Microsoft team has been moving features from the SQL Server 2019 preview extension into the core Azure Data Studio tool. Here is a summary of the features moved into the core tool:
Jupyter Notebook support has been moved to Azure Data Studio.
Bug fixes in the External Data wizards: new schemas typed into the table mapping controls were getting lost (this is now fixed), and Oracle type mappings have been updated.

For more information, check out the official March release notes for Azure Data Studio.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
Microsoft Azure now supports NVIDIA GPU Cloud (NGC)
Microsoft Azure’s new governance DApp: An enterprise blockchain without mining


Spotify releases Chartify, a new data visualization library in Python for easier chart creation

Natasha Mathur
19 Nov 2018
2 min read
Spotify announced last week that it has come out with Chartify, a new open source Python data visualization library that makes it easy for data scientists to create charts. It comes with features such as a concise, user-friendly syntax and consistent data formatting, among others. Let’s have a look at the features of this new library.

Concise and user-friendly syntax

Despite the abundance of tools such as Seaborn, Matplotlib, Plotly, Bokeh, etc., used by data scientists at Spotify, chart creation has always been a major issue in the data science workflow. Chartify solves that problem, as its syntax is considerably more concise and user-friendly compared to the other tools. Suggestions are included in the docstrings, allowing users to recall the most common formatting options. This, in turn, saves time, allowing data scientists to spend less time configuring chart aesthetics and more time actually creating charts.

Consistent data formatting

Another common problem faced by data scientists is that different plotting methods need different input data formats, requiring users to completely reformat their input data. This leads to data scientists spending a lot of time manipulating data frames into the right state for their charts. Chartify’s consistent input data formatting allows you to quickly create and iterate on charts, since less time is spent on data munging.

Other features

Since a majority of problems can be solved by just a few chart types, Chartify focuses mainly on these use cases and comes with a complete example notebook that presents the full list of chart types Chartify is capable of generating. Moreover, adding color to charts greatly helps simplify the charting process, which is why Chartify has different palette types aligned to the different use cases for color. Additionally, Chartify offers support for Bokeh, an interactive Python library for data visualization, giving users the option to fall back on manipulating Chartify charts with Bokeh if they need more control. To get a feel for the syntax, a short usage sketch follows at the end of this article.

For more information, check out the official Chartify blog post.

cstar: Spotify’s Cassandra orchestration tool is now open source!
Spotify has “one of the most intricate uses of JavaScript in the world,” says former engineer
8 ways to improve your data visualizations
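To illustrate the concise syntax described above, here is a minimal sketch using Chartify’s bar-chart API; the sample data frame is made up for the example:

```python
import chartify
import pandas as pd

# Made-up sample data for illustration.
data = pd.DataFrame({
    'fruit': ['apple', 'banana', 'grape'],
    'count': [10, 20, 5],
})

# Chartify charts are configured up front, then plotted from a tidy frame.
ch = chartify.Chart(blank_labels=True, x_axis_type='categorical')
ch.set_title('Fruit counts')
ch.plot.bar(
    data_frame=data,
    categorical_columns='fruit',
    numeric_column='count',
)
ch.show()  # renders via Bokeh in a browser or notebook
```

Note how the plot call takes an ordinary pandas DataFrame plus column names, which is the consistent input format the library is built around.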

Google announces the general availability of a new API for Google Docs

Amrata Joshi
12 Feb 2019
2 min read
Yesterday, Google announced the general availability of a new API for Google Docs that will help developers automate tasks that users would otherwise do manually in the company’s online office suite. The API lets users read and write documents programmatically so that they can integrate data from various sources. The API had been in developer preview since Google Cloud Next 2018 and is now available to all developers.

The API lets users automate processes, create documentation in bulk, and generate invoices or contracts. With it, developers can set up processes that manipulate documents. It gives the ability to insert, move, delete, merge, and format text, insert inline images, and work with lists.

Zapier, Netflix, Mailchimp, and Final Draft are some of the companies that built solutions based on the new API during the preview period. Zapier integrated the Docs API into its workflow automation tool to help users create offer letters based on a template. Netflix used it to build an internal tool that allows its engineers to gather data and automate its documentation workflow.

The API will help users regularly create similar documents with changing order numbers and line items based on information from third-party systems. The API’s import/export abilities also let users use Docs as part of internal content management systems. A minimal sketch of the API in use follows at the end of this article.

Some users are happy with this news and excited to use the API. One user commented on Hacker News, “That is such great work. Getting the job done with the tools already around is just such a good feeling.” Others, however, think that it will take some time for Google to reach where Microsoft is now. Another comment reads, “They will have a lot of catchup to do to get where Office is now. I'm frankly amazed by how good Microsoft Flow has been.” Another user commented, “Microsoft Flow is a really powerful - in terms of advanced capabilities it offers.”

To know more about this news, check out Google’s official post.

Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
Youtube promises to reduce recommendations of ‘conspiracy theory’. Ex-googler explains why this is a ‘historic victory’
Google’s Adiantum, a new encryption standard for lower-end phones and other smart devices
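As a rough illustration of the read/write workflow described above, here is a minimal Python sketch using the google-api-python-client library; the token path, document title, and inserted text are made-up example values, and obtaining the OAuth token is assumed to have happened elsewhere:

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumes an OAuth token previously saved to token.json (hypothetical path).
creds = Credentials.from_authorized_user_file('token.json')
service = build('docs', 'v1', credentials=creds)

# Create a new document.
doc = service.documents().create(body={'title': 'Invoice #1042'}).execute()
doc_id = doc['documentId']

# Programmatically insert text at the start of the document body.
requests = [{
    'insertText': {
        'location': {'index': 1},  # index 1 is the start of the body
        'text': 'Order number: 42\nLine item: 3 widgets\n',
    }
}]
service.documents().batchUpdate(documentId=doc_id,
                                body={'requests': requests}).execute()
```

This create-then-batchUpdate pattern is what makes template-driven documents (offer letters, invoices) straightforward to generate in bulk.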


Google AI introduces Snap, a microkernel approach to ‘Host Networking’

Savia Lobo
29 Oct 2019
4 min read
A few days ago, the Google AI team introduced Snap, a microkernel-inspired approach to host networking, at the 27th ACM Symposium on Operating Systems Principles. Snap is a userspace networking system with flexible modules that implement a range of network functions, including edge packet switching, virtualization for Google’s cloud platform, traffic shaping policy enforcement, and a high-performance reliable messaging and RDMA-like service. The Google AI team says, “Snap has been running in production for over three years, supporting the extensible communication needs of several large and critical systems.”

Why Snap?

Prior to Snap, the Google AI team says, it was limited in its ability to develop and deploy new network functionality and performance optimizations in several ways. First, developing kernel code was slow and drew on a smaller pool of software engineers. Second, feature release through kernel module reloads covered only a subset of functionality and often required disconnecting applications, while the more common case of requiring a machine reboot necessitated draining the machine of running applications.

Unlike prior microkernel systems, Snap benefits from multi-core hardware for fast IPC and does not require the entire system to adopt the approach wholesale, as it runs as a userspace process alongside Google’s standard Linux distribution and kernel.

[Figure: Snap architecture. Source: Snap research paper]

Using Snap, the Google researchers also created a new communication stack called Pony Express that implements a custom reliable transport and communications API. Pony Express provides significant communication efficiency and latency advantages to Google applications, supporting use cases ranging from web search to storage.

Features of the Snap userspace networking system

Snap’s architecture combines recent ideas in userspace networking, in-service upgrades, centralized resource accounting, programmable packet processing, kernel-bypass RDMA functionality, and optimized co-design of transport, congestion control, and routing. With these, Snap:

Enables a high rate of feature development with a microkernel-inspired approach of developing in userspace with transparent software upgrades. It also retains the centralized resource allocation and management capabilities of monolithic kernels and improves upon accounting gaps in existing Linux-based systems.

Implements a custom kernel packet injection driver and a custom CPU scheduler that enable interoperability without requiring the adoption of new application runtimes, while maintaining high performance across use cases that simultaneously require packet processing through both Snap and the Linux kernel networking stack.

Encapsulates packet processing functions into composable units called “engines”, which enables both modular CPU scheduling as well as incremental and minimally disruptive state transfer during upgrades.

Through Pony Express, provides support for OSI layer 4 and 5 functionality through an interface similar to an RDMA-capable “smart” NIC. This enables transparently leveraging offload capabilities in emerging hardware NICs as a means to further improve server efficiency and throughput.

Delivers 3x better transport processing efficiency than the baseline Linux kernel and supports RDMA-like functionality at speeds of 5M ops/sec/core.

MicroQuanta: Snap’s new lightweight kernel scheduling class

To dynamically scale CPU resources, Snap works in conjunction with a new lightweight kernel scheduling class called MicroQuanta. It provides a flexible way to share cores between latency-sensitive Snap engine tasks and other tasks, limiting the CPU share of latency-sensitive tasks while maintaining low scheduling latency. A MicroQuanta thread runs for a configurable runtime out of every period, with the remaining CPU time available to other CFS-scheduled tasks, using a variation of a fair queuing algorithm for high- and low-priority tasks (rather than more traditional fixed time slots). A toy sketch of this runtime/period idea appears after the links below.

MicroQuanta is a robust way for Snap to get priority on cores runnable by CFS tasks while avoiding starvation of critical per-core kernel threads. While other Linux real-time scheduling classes use both per-CPU tick-based and global high-resolution timers for bandwidth control, MicroQuanta uses only per-CPU high-resolution timers. This allows scalable time-slicing at microsecond granularity.

Snap is being received positively by many in the community.

https://twitter.com/copyconstruct/status/1188514635940421632

To know more about Snap in detail, you can read its complete research paper.

Amazon announces improved VPC networking for AWS Lambda functions
Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels
ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more
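To make the runtime/period budgeting concrete, here is a toy Python simulation; this is not Snap code, just an illustration of the scheme described above, with all numbers chosen arbitrarily:

```python
# Toy model of MicroQuanta-style time-slicing: a latency-sensitive task
# gets `runtime_us` microseconds out of every `period_us` microseconds,
# and CFS-scheduled tasks share the remainder of each period.
def microquanta_timeline(period_us=1000, runtime_us=200, total_us=4000):
    timeline = []
    for start in range(0, total_us, period_us):
        timeline.append(('snap-engine', start, start + runtime_us))
        timeline.append(('cfs-tasks', start + runtime_us, start + period_us))
    return timeline

if __name__ == '__main__':
    for task, begin, end in microquanta_timeline():
        print(f'{task:11s} {begin:5d}..{end:5d} us')
```

In this toy model the engine task is guaranteed 20% of every millisecond, which mirrors the idea of bounding the CPU share of latency-sensitive work while leaving the rest to best-effort tasks.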


React 16.5.0 is now out with a new package for scheduling, support for DevTools, and more!

Bhagyashree R
07 Sep 2018
3 min read
React announced its monthly release yesterday: React 16.5.0. In this release, the team has improved warning messages, added support for the React DevTools Profiler in React DOM, and fixed several bugs.

Updates in React

A dev warning is shown if the React.forwardRef render function doesn't take exactly two arguments.
A more improved message is shown if someone passes an element to createElement by mistake.
The onRender function will be called after mutations, and commitTime reflects pre-mutation time.

Updates in React DOM

New additions:
Support for the React DevTools Profiler is added.
The react-dom/profiling entry point is added for profiling in production.
The onAuxClick event is added for browsers that support it.
The movementX and movementY fields are added to mouse events.
The tangentialPressure and twist fields are added to pointer events.
Support for passing booleans to the focusable SVG attribute.

Improvements:
Improved component stack for the folder/index.js naming convention.
Improved warning when using getDerivedStateFromProps without initialized state.
Improved invalid textarea usage warning.
Electron's <webview> tag is now allowed without warnings.

Bug fixes:
Fixed incorrect data in the compositionend event when typing Korean on IE11.
Empty values are no longer set on submit and reset buttons.
Fixed the onSelect event not being triggered after drag and drop.
Fixed the onClick event not working inside a portal on iOS.
Fixed a performance issue when thousands of roots are re-rendered.
gridArea will be treated as a unitless CSS property.
Fixed the checked attribute not getting initially set on the input.
Fixed a crash when using dynamic children in the option tag.

Updates in React DOM Server

Fixed a crash that happens during server render in React 16.4.1.
Fixed a crash when setTimeout is missing.
Fixed a crash with nullish children when using dangerouslySetInnerHTML in a selected option.

Updates in React Test Renderer and Test Utils

The Jest-specific ReactTestUtils.mockComponent() helper is now deprecated.
A warning is shown when a React DOM portal is passed to ReactTestRenderer.
Improvements in TestUtils error messages for a bad first argument.

Updates in React ART

Support for DevTools is added.

New package for scheduling (experimental)

The ReactDOMFrameScheduling module will be pulled out into a separate package for cooperatively scheduling work in a browser environment. It's used by React internally, but its public API is not finalized yet.

To see the complete list of updates in React 16.5.0, head over to their GitHub repository.

React Next
React Native 0.57 coming soon with new iOS WebViews
Implementing React Component Lifecycle methods [Tutorial]
Understanding functional reactive programming in Scala [Tutorial]


Dr. Brandon explains 'Transfer Learning' to Jon

Shoaib Dabir
15 Nov 2017
5 min read
[box type="shadow" align="" class="" width=""]Dr. Brandon: Hello and welcome to another episode of 'Date with Data Science'. Today we are going to talk about a topic that is all the rage these days in the data science community: Transfer Learning.  Jon: 'Transfer learning' sounds all sci-fi to me. Is it like the thing that Prof. X does in X-men reading other people's minds using that dome-like headset thing in his chamber? Dr. Brandon: If we are going to get X-men involved, what Prof. X does is closer to deep learning. We will talk about that another time. Transfer learning is simpler to explain. It's what you actually do everytime you get into some character, Jon.  Say, you are given the role of  Jack Sparrow to play. You will probably read a lot about pirates, watch a lot of pirate movies and even Jonny Depp in character and form your own version of Jack Sparrow. Now after that acting assignment is over, say you are given the opportunity to audition for the role of Captain Hook, the famous pirate from Peter Pan. You won't do your research from ground zero this time. You will retain general mannerisms of a Pirate you learned from your previous role, but will only learn the nuances of Captain Hook, like acting one-handed. Jon: That's pretty cool! So you say machines can also learn this way? Dr.Brandon: Of course, that's what transfer learning is all about: learn something, abstract the learning sufficiently, then apply it to another related problem. The following is an excerpt from a book by Kuntal Ganguly titled Learning Generative Adversarial Networks.[/box] Pre-trained models are not optimized for tackling user specific datasets, but they are extremely useful for the task at hand that has similarity with the trained model task. For example, a popular model, InceptionV3, is optimized for classifying images on a broad set of 1000 categories, but our domain might be to classify some dog breeds. A well-known technique used in deep learning that adapts an existing trained model for a similar task to the task at hand is known as Transfer Learning. And this is why Transfer Learning has gained a lot of popularity among deep learning practitioners and in recent years has become the go-to technique in many real-life use cases. It is all about transferring knowledge (or features) among related domain. Purpose of Transfer Learning Let say you have trained a deep neural network to differentiate between fresh mango and rotten mango. During training, the network requires thousands of rotten and fresh mango images and hours of training to learn knowledge like if any fruit is rotten, a liquid will ooze out of the fruit and it produce a bad odor. Now with this training experience the network, can be used for different task/use-case to differentiate between a rotten apple and fresh apple using the knowledge of rotten features learned during training of mango images. The general approach of Transfer Learning is to train a base network and then copy its first n layers to the first n layers of a target network. The remaining layers of the target network are initialized randomly and trained toward the targeted use-case. The main scenarios for using Transfer Learning in your deep learning workflow are as follows: Smaller datasets: When you have a smaller dataset, building a deep learning model from scratch won't work well. Transfer Learning provides the way to apply a pre-trained model to new classes of data. 
Let's say a pre-trained model built from one million images of ImageNet data will converge to a decent solution (after training on just a fraction of the available smaller training data, for example, CIFAR-10) compared to a deep learning model built with a smaller dataset from scratch. Less resource: Deep learning process (such as convolution) requires a significant amount of resource and time. Deep learning process are well suited to run on high graded GPU-based machines. But with pre-trained models, you can easily train across a full training set (let's say 50000 images) in less than a minute using your laptop/notebook without GPU, since the majority of time a model is modified in the final layer with a simple update of just a classifier or regressor. Various approaches of using pre-trained models Using pre-trained architecture: Instead of transferring weights of the trained model, we can only use the architecture and initialize our own random weights to our new dataset. Feature extractor: A pre-trained model can be used as a feature extraction mechanism just by simply removing the output layer of the network (that gives the probabilities for being in each of the n classes) and then freezing all the previous layers of the network as a fixed feature extractor for the new dataset. Partially freezing the network: Instead of replacing only the final layer and extracting features from all previous layers, sometime we might train our new model partially (that is, to keep the weights of initial layers of the network frozen while retraining only the higher layers). Choice of the number of frozen layers can be considered as one more hyper-parameter. Next, read about how transfer learning is being used in the real world. If you enjoyed the above excerpt, do check out the book it is from.

The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0

Bhagyashree R
10 Jan 2019
2 min read
Today, Minko Gechev, an engineer on the Angular team at Google, announced the release of Angular CLI 7.2.1. This release fixes a webpack-dev-server vulnerability and also comes with support for a multiselect list prompt, TypeScript 3.2, and Angular 7.2.0-rc.0.

https://twitter.com/mgechev/status/1083133079579897856

Understanding the webpack-dev-server vulnerability

The npm install command was flagging the Missing Origin Validation vulnerability because webpack-dev-server versions before 3.1.10 are missing origin validation on the websocket server. A remote attacker can take advantage of this vulnerability to steal a developer's code, because the origin of requests to the websocket server, which is used for Hot Module Replacement (HMR), is not validated.

Other updates in Angular CLI 7.2.1

Several updates and bug fixes are listed in the release notes in Angular CLI's GitHub repository. Some of them are:

Support is added for a multiselect list prompt.
Support is added for TypeScript 3.2 and Angular 7.2.0-rc.0.
Optimization options are updated.
Warnings are added for overriding flags in arguments.
lintFix is added to several other schematics.
`resourcesOutputPath` is added to the schema to define where style resources will be placed, relative to outputPath.
The architect command project parsing is improved.
Prompt support is added using Inquirer.
The Jobs API is added.
Directly loading component templates is supported.

Angular 7 is now stable
Unit testing Angular components and classes [Tutorial]
Setting up Jasmine for Unit Testing in Angular [Tutorial]


Mozilla removes Avast and AVG extensions from Firefox to secure user data

Fatema Patrawala
05 Dec 2019
4 min read
Yesterday, Wladimir Palant, the creator of Adblock Plus, reported that Mozilla has removed four Firefox extensions made by Avast and its subsidiary AVG. Palant had also found credible evidence of the extensions harvesting user data and browsing histories. The four extensions are Avast Online Security, AVG Online Security, Avast SafePrice, and AVG SafePrice. The first two show warnings when navigating to known malicious or suspicious sites, while the last two are extensions for online shoppers, showing price comparisons, deals, and available coupons.

Avast and AVG extensions were caught in October

Mozilla removed the four extensions from its add-ons portal after receiving a report from Palant. Palant analyzed the Avast Online Security and AVG Online Security extensions in late October and found that the two were collecting much more data than they needed to work, including detailed user browsing history, a practice prohibited by both Mozilla and Google. He published a blog post on October 28 detailing his findings, but in a blog post dated today, he says he found the same behavior in the Avast and AVG SafePrice extensions as well.

After his original blog post, Mozilla did not intervene to take down the extensions. Palant reported them again to Mozilla developers yesterday, and they removed all four add-ons within 24 hours.

“The Avast Online Security extension is a security tool that protects users online, including from infected websites and phishing attacks,” an Avast spokesperson told ZDNet. “It is necessary for this service to collect the URL history to deliver its expected functionality. Avast does this without collecting or storing a user's identification.”

“We have already implemented some of Mozilla's new requirements and will release further updated versions that are fully compliant and transparent per the new requirements,” the Avast spokesperson said. “These will be available as usual on the Mozilla store in the near future.”

Extensions still available on the Chrome browser

The four extensions are still available on the Chrome Web Store, according to Palant. "The only official way to report an extension here is the 'report abuse' link," he writes. "I used that one of course, but previous experience shows that it never has any effect. Extensions have only ever been removed from the Chrome Web Store after considerable news coverage," he added.

On Hacker News, users discussed how the Avast extensions trick browsers in order to inspect TLS/SSL packets. One of the users commented, “Avast even does some browser trickery to then be able to inspect tls/ssl packets. Not sure how I noticed that on a windows machine, but the owner was glad to uninstall it. As said on other comments, the built-in windows 10 defender AV is the least evil software to have enabled for somewhat a protected endpoint. The situation is desperate for AV publishers, they treat customers like sheep, the parallel with mafia ain't too far possible to make. It sorts of reminds me 20 years back when it was common discussion to have on how AV publishers first deployed a number of viruses to create a market. The war for a decent form of cyber security and privacy is being lost. It's getting worse every year. More money (billions) is poured into it. To no avail. I think we got to seriously show the example and reject closed source solutions all together, stay away from centralized providers, question everything we consume. The crowd will eventually follow.”

Mozilla’s sponsored security audit finds a critical vulnerability in the tmux integration feature of iTerm2
Mozilla Thunderbird 78 will include OpenPGP support, expected to be released by Summer 2020
Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol


Tableau Foundation partners reflect on 2020 and data for impact from What's New

Anonymous
28 Dec 2020
10 min read
Neal Myrick Global Head of the Tableau Foundation Kristin Adderson December 28, 2020 - 9:59pm December 27, 2020 Addressing a global pandemic and economic crisis while also driving for change in other areas—from racial inequity to equitable education to hunger—is a monumental challenge. We are lucky to have amazing nonprofit partners tackling these issues. We took a moment to check-in with some to hear how the year has shaped—and reshaped—their approach to solving some of the world’s most pressing issues.  Driving for racial equity and justice amid the pandemic: PolicyLink “2020 was tragic and heart-opening for racial equity,” says Josh Kirschenbaum, Chief Operating Officer of the racial equity research and action institute PolicyLink. COVID-19 exposed racial disparities in health and access to care, and the murder of George Floyd and the protests that followed showed how far the country has to go to address them. “We have to step into this opening and move into an era of reckoning and acceleration around equity and justice that we’ve never seen before,” he says. Over the past year, PolicyLink helped drive the conversation around the need for equity-based solutions to COVID-19 with a comprehensive plan and set of policy priorities for pandemic response. They also released a weekly publication called COVID, Race, and the Revolution. “It’s critical to connect our data and policy proposals with narrative and communications,” Kirschenbaum says.  PolicyLink has also worked to draw attention to the broader racial inequity crisis in the U.S. This summer, they released their Racial Equity Index, a data tool to measure the state of equity in the largest 100 metros across the U.S. They also released a report outlining racial disparities in the workforce during the pandemic and were a founding partner in WeMustCount.org, an effort to push for COVID-19 data disaggregated by race. In 2021, PolicyLink wants to transform the energy and data around racial disparities in the U.S. into structural change. “We are no longer at the level of just doing project-based work,” Kirschenbaum says. “This is the time to lead with transformative solidary, focus on equity, and really redesign the nation.” Combatting increasing hunger: Feeding America and World Food Programme Image credit: Feeding AmericaCOVID-19 is a multi-level crisis. Our partners at Feeding America and the World Food Programme have seen firsthand how the pandemic has affected hunger in the U.S. and the world—and they’re working to respond to it.  “There’s been a perfect storm of increased demand, declines in donations of food, and disruptions to the charitable food assistance system’s operating model,” says Christine Feiner, Feeding America’s director of corporate partnerships. The organization estimates that progress made against food insecurity in the U.S.—which before was at the lowest it had been in 20 years—will be wiped out due to COVID-19. Over the last year, Feeding America saw demand increase 60% across its network of 200 food banks. Feeding America has relied on data to guide the organization through the pandemic. They launched a survey to understand demand and challenges across their member food banks. “That allowed us to have a real-time view into what food banks were seeing on the ground so we could property support them and connect them to additional resources,” Feiner says. Feeding America has also used this data to push for policy change at the federal level to help people at risk of hunger—work they plan to continue next year.  
The United Nations World Food Programme—which became a Nobel Peace Prize Laureate this year—has been contending with increased need globally. “Roughly a quarter of a billion people—especially the already poor—are expected to have experienced food insecurity this year, largely driven by the loss of jobs, remittances, and purchasing power. Already poor and food insecure populations are disproportionately affected,” says Pierre Guillaume Wielezynski, the digital transformation services chief at WFP.  With the pandemic limiting WFP’s ability to work directly in communities and deliver aid, they’ve been able to use data and technology to reach people in need. In Tableau, they built a shipping and logistics platform for the entire humanitarian sector to manage and track aid deliveries in real-time. And they’ve been able to analyze data from technologies like text messaging and chatbots to get a picture of needs on the ground and ensure they’re responding most helpfully and efficiently.  Next year, WFP will continue to focus on delivering aid in communities while pushing for policy change, Wielezynski says. “Our presence in over 80 countries gives us a unique position to help advise our government partners on solutions to hunger and food insecurity,” he says.  Keeping the spotlight on homelessness: Community Solutions Image credit: Community SolutionsSince the beginning of the pandemic, the spotlight has been on frontline workers: healthcare professionals, post office workers, grocery store clerks. “What a lot of people didn’t really recognize initially was that homeless service providers are also on the frontline of protecting a population that is especially vulnerable to COVID-19,” says Anna Kim, communications lead for Community Solutions.  Communities and agencies that work with Community Solutions through their Built for Zero initiative—a data-driven program to bring about a functional end to homelessness—had to expand from the already-steep task of doing homeless response work to emergency pandemic response. “They needed to figure out how to get masks and PPE, and how to make shelters safe,” Kim says. But communities that have already been collecting detailed, person-specific data on their homeless population through Built for Zero found that same data to be critical in responding to COVID-19. Communities like Jacksonville, Florida, were able to use their by-name list of people experiencing homelessness to conduct wide-spread testing and keep people safe.  Throughout the pandemic, Community Solutions has elevated the importance of addressing homelessness as both a public health and racial equity imperative. “The raised public consciousness around racial equity after the murder of George Floyd has also heightened the importance of understanding how homelessness has always disproportionately impacted Black and Native populations,” Kim says. “We’ve been able to raise awareness of the need to invest even further in addressing these disparities and ending homelessness.” Community Solutions was recently named a finalist in the prestigious MacArthur Foundation 100&Change competition for their exceptional work. Next year, they hope to expand partnerships with cities across the U.S. to continue driving for an end to homelessness—even in the face of enormous health and economic challenges. Addressing growing education equity gaps: Equal Opportunity Schools As COVID-19 has forced schools to close and learning to go remote, equity divides among students have grown even more pronounced. 
“We talk about how COVID-19 is exacerbating inequities and pre-existing conditions in health, but it’s also true in education,” says Sasha Rabkin, chief strategy officer for Equal Opportunity Schools, an organization focused on closing equity gaps in education. “And inequity is a pre-existing condition.” EOS has built data tools for schools and districts to understand inequities and how they play out along racial lines. Through the surveys they conducted twice in 2020, EOS found that over 75% of students say that they are struggling with motivation–particularly with balancing coursework with the desire to have deep conversations about what’s happening globally with COVID, racial injustice, and political movements. “For educators to be able to hear that is invaluable,” Rabkin says. What’s on the mind of EOS and the educators they work with is how they can more genuinely meet students where they are and construct learning environments that respond to the current moment and bring students along. “Can we start to think about measuring and understanding and engaging with what matters, instead of continuing with the status quo? Schools look a lot like they did 20 years ago. Can we make this a moment to think critically about what we could be doing differently?” Supporting access to sanitation and hand-washing infrastructure: Splash A Splash handwashing station (Image credit: Make Beautiful)As a nonprofit, Splash focuses on providing handwashing, hygiene, and sanitation infrastructure to kids in schools and orphanages in cities throughout the Global South. During the pandemic, says Laura Mapp, Director of Business Development at Splash, their work has become even more essential and complicated. “At the beginning of the pandemic, we engaged in direct COVID relief with our government partners in Ethiopia,” Mapp says. In Addis Ababa, three of the schools where Splash had previously installed hand-washing stations and sanitation infrastructure became quarantine centers, where people who suspected they had the virus could safely quarantine away from their families. Splash also partnered with the Bureau of Health in Addis Ababa to bring their handwashing stations to six hospitals across the city. They’ve been able to install sanitation infrastructure in schools while children are learning remotely. Students learning from home, Mapp says, spurred Splash to innovate on ways to reach them virtually with messaging about the importance of handwashing and information about menstrual health, especially for girls. “This is helping us forge some new partnerships to enable the delivery of these tools, particularly in India, where mobile and computer usage is more accessible,” Mapp says. For instance, they’re partnering with a platform called Oky, designed by Unicef, that young girls can use to get answers about menstrual health questions. While the pandemic continues to pose significant challenges in the communities where Splash works, Mapp is hopeful that the increased attention on the need for good sanitation infrastructure and communication around hygiene best practices will help keep people safe through and beyond the pandemic. Pivoting a successful social enterprise to meet community needs: FareStart Image Credit: FareStartAs soon as COVID-19 began forcing lockdowns in cities across the U.S., FareStart knew it would have to pivot its operations. 
The social enterprise manages a handful of restaurants and cafes across Seattle, where people facing barriers to employment—from homelessness to a history of drug use—gain training and experience in the workforce. With restaurants shuttering and in-person work discouraged, FareStart’s programs could not continue as normal.

Almost immediately, FareStart began using its restaurants and kitchens to prepare individual meals to deliver to the most vulnerable. FareStart is now serving over 50,000 individual meals per week, distributed across more than 100 sites, says Erika Van Merr, FareStart’s Associate Director of Philanthropy.

Managing this broad distribution operation and network, Van Merr says, has required more data than FareStart has ever used before. They’ve been using external data to understand the COVID-19 situation in their community while entering data daily to track each meal: where it was prepared, for which organization, and where it was distributed. “We really had to up our data savviness to make decisions about how to operate daily,” Van Merr says. They plan to continue using data to expand their community meals program even after the pandemic is over.

While the organization has launched virtual training programs, it looks forward to bringing students back in person and reopening its restaurants and cafes. “When people ask what our plans look like for next year, I tell them that we will continue to provide hunger relief for our community’s most vulnerable neighbors,” says Van Merr.

To learn more about Tableau Foundation and its partners, visit tableau.com/foundation.
article-image-microsoft-cloud-services-gdpr
Vijin Boricha
25 Apr 2018
2 min read
Save for later

Microsoft Cloud Services get GDPR Enhancements

With the GDPR deadline looming closer every day, Microsoft has started to apply the General Data Protection Regulation (GDPR) to its cloud services. Microsoft recently announced enhancements to help organizations using Azure and Office 365 services meet GDPR requirements. With these improvements, it aims to ensure that both Microsoft's services and the organizations benefiting from them will be GDPR-compliant by the law's enforcement date.

Microsoft tools supporting GDPR compliance are as follows:

- Service Trust Portal, which provides GDPR information resources
- Security and Compliance Center in the Office 365 Admin Center
- Office 365 Advanced Data Governance, for classifying data
- Azure Information Protection, for tracking and revoking documents
- Compliance Manager, for keeping track of regulatory compliance
- Azure Active Directory Terms of Use, for obtaining informed user consent

Microsoft recently released a preview of a new Data Subject Access Request interface in the Security and Compliance Center and, via a new tab, in the Azure Portal. According to the Microsoft 365 team, the interface is also available in the Service Trust Portal. A Microsoft Tech Community post also claims that the portal will be getting a "Data Protection Impact Assessments" section in the coming weeks.

With the new Data Subject Access Request interface preview, organizations can now search for "relevant data across Office 365 locations," covering Exchange, SharePoint, OneDrive, Groups, and Microsoft Teams. As explained by Microsoft, the resulting data is exported for review prior to being transferred to the requestor.

According to Microsoft, the Data Subject Access Request capabilities will be out of preview before the GDPR deadline of May 25th. It also claims that IT professionals will be able to execute DSRs (Data Subject Requests) against system-generated logs.

To know more in detail, you can visit Microsoft's blog post.

article-image-google-project-zero-reveals-an-imessage-bug-that-bricks-iphone-causing-repetitive-crash-and-respawn-operations
Savia Lobo
08 Jul 2019
3 min read
Save for later

Google Project Zero reveals an iMessage bug that bricks iPhone causing repetitive crash and respawn operations

A zero-day vulnerability in Apple's iMessage that bricks an iPhone and survives hard resets was recently brought to light: a specific type of malformed message, once sent to a victim's device, forces the user to factory-reset it. The issue was first posted by Google Project Zero researcher Natalie Silvanovich on the project's issue page on April 19, 2019. Under the usual 90-day disclosure policy, the bug was withheld from public view until either 90 days had elapsed or a patch had been made broadly available to the public. On 4th July, Silvanovich revealed that the issue was fixed in the Apple iOS 12.3 update, thus making it public.

Labelled as CVE-2019-8573 and CVE-2019-8664, this vulnerability causes a Mac to crash and respawn. On an iPhone, Silvanovich says, this code is in Springboard, and "receiving this message will cause Springboard to crash and respawn repeatedly, causing the UI not to be displayed and the phone to stop responding to input. The only way I could find to fix the phone is to reboot into recovery mode and do a restore. This causes the data on the device to be lost".

According to Forbes, "The message contains a property with a key value that is not a string, despite one being expected. Calling a method titled IMBalloonPluginDataSource _summaryText, the method assumes the key in question is a string but does not verify it is the case". The subsequent call to IMBalloonPluginDataSource replaceHandlewithContactNameInString then calls im_handleIdentifiers on the supposed string, which in turn results in a thrown exception.

For testing purposes, Silvanovich shared in her patch update three ways she found to unbrick the device:

- wipe the device with 'Find my iPhone'
- put the device in recovery mode and update via iTunes (note that this will force an update to the latest version)
- remove the SIM card, go out of Wi-Fi range, and wipe the device in the menu

Google Project Zero has also released instructions to reproduce the issue (an illustrative sketch of this kind of Frida harness follows the related links at the end of this article):

- install frida (pip3 install frida)
- open sendMessage.py and replace the sample receiver with the phone number or email of the target device
- in the local directory, run: python3 sendMessage.py

Users should make sure their iPhones are up to date with the latest iOS 12.3 update. Read more about the vulnerability on Google Project Zero's issue page.

Approx. 250 public network users affected during Stack Overflow's security attack

Google researcher reveals an unpatched bug in Windows' cryptographic library that can quickly "take down a windows fleet"

All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night
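Project Zero's sendMessage.py is not reproduced in this article, so as a rough illustration only, here is a minimal sketch of how a Frida-based harness of that kind attaches to a process and runs instrumentation inside it. The target process name (imagent, the macOS iMessage daemon), the receiver placeholder, and the injected script body are assumptions made for illustration, not Project Zero's actual code; only the Frida API calls themselves (frida.attach, create_script, script.on, script.load, and the ObjC bridge) are real.

# Hypothetical sketch -- NOT Project Zero's sendMessage.py.
import frida

# Placeholder only; a real harness would substitute the target's
# phone number or email, per the repro instructions above.
RECEIVER = "+15555550100"

# JavaScript that Frida injects and runs inside the attached process.
JS_SOURCE = """
if (ObjC.available) {
    try {
        // Resolve the Objective-C class implicated in the crash
        // inside the attached process.
        var cls = ObjC.classes.IMBalloonPluginDataSource;
        send('attached; IMBalloonPluginDataSource resolved: ' + !!cls);
    } catch (e) {
        send('class lookup failed: ' + e.message);
    }
}
"""

def on_message(message, data):
    # Relay anything the injected script emits via send().
    print("[script]", message)

# Attach by name to the iMessage daemon on the local machine.
session = frida.attach("imagent")
script = session.create_script(JS_SOURCE)
script.on("message", on_message)
script.load()
input("Instrumentation loaded; press Enter to detach...\n")

An actual exploit harness would go on to drive the hooked messaging APIs to build and deliver the malformed payload; the sketch above stops at confirming that instrumentation is in place.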