
Tech News

Microsoft will not support Windows registry backup by default, to reduce disk footprint size from Windows 10 onwards

Vincy Davis
02 Jul 2019
3 min read
After the release of the Windows 10 October 2018 Update, it was speculated that Windows 10 might have a bug preventing the successful execution of the registry backup task, which is usually enabled by default on PCs running the operating system. Eight months later, Microsoft has come back with an answer to this speculation, stating that it was not a bug but a change in “design” that prevented the execution of registry backups. Throughout those eight months, Microsoft did not notify users of the change. Had the Windows System Restore point failed for any reason, some of the roughly 800 million Windows 10 users could have lost their data.

Last week, Microsoft released a support document stating that from Windows 10 version 1803 onwards, Windows will no longer back up the system registry to the RegBack folder by default. Microsoft says the change is “intended to help reduce the overall disk footprint size of Windows.” If you browse to the Windows\System32\config\RegBack folder, all the registry hives are still present, but each now has a file size of 0 KB. Registry backups are extremely important for users because they are the only option available if the Windows System Restore point fails.

How to manually switch automatic registry backups back on

Though Windows will no longer create registry backups by default, Microsoft has not removed the feature entirely; users can still create registry backups automatically by using a system restore point. Windows 10 users can change the new default behavior using the following steps (a scripted version follows at the end of this article):

1. Configure a new REG_DWORD registry entry at HKLM\System\CurrentControlSet\Control\Session Manager\Configuration Manager\EnablePeriodicBackup and assign it the value 1.
2. After restarting the system, Windows will back up the registry to the RegBack folder and create a RegIdleBackup task to manage subsequent backups. Windows stores the task information in the Scheduled Task Library, under the Microsoft\Windows\Registry folder. (The task’s full properties are listed in the Microsoft support document.)

Users are skeptical that Microsoft removed registry backups just to save disk space. A user on Hacker News comments, “50-100MB seems like a miniscule amount of space to warrant something like this. My WinSxS folder alone is almost 10GB. If they wanted to save space, even a modest improvement in managing updates would yield space saving results orders of magnitude greater than this.” Another user adds, “Of all the stuff crammed automatically on Windows 10 install .. they can't be serious about saving space.” Another user wrote, “This sort of thinking might have been understandable back during the '90's. However, today, people have plenty of free space on their hard disk. The track record of Windows 10 has been so poor lately that it's surprising that MS got so overconfident that they decided that they didn't need safeguards like this any longer.”

Read the Microsoft support document for more details.

Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities
Docker and Microsoft collaborate over WSL 2, future of Docker Desktop for Windows is near
Microsoft finally makes Hyper-V Server 2019 available, after a delay of more than six months
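For those who prefer to script the change, here is a minimal sketch using Python's standard winreg module. It simply sets the EnablePeriodicBackup value described above and must be run from an elevated (Administrator) Python process; the registry path and value come from Microsoft's support document, while the script itself is only an illustration, not an official tool.

import winreg

# Key path named in Microsoft's support document
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Configuration Manager"

# Open (or create) the key under HKEY_LOCAL_MACHINE with write access
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # A REG_DWORD value of 1 re-enables periodic registry backups to RegBack
    winreg.SetValueEx(key, "EnablePeriodicBackup", 0, winreg.REG_DWORD, 1)

print("EnablePeriodicBackup set to 1; restart Windows for the change to take effect.")

After a restart, the RegIdleBackup task described above should repopulate the RegBack folder on its normal schedule.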

Could Apple’s latest acquisition yesterday of an AR lens maker signal its big plans for its secret Apple car?

Savia Lobo
30 Aug 2018
2 min read
Yesterday, Apple Inc. announced the acquisition of Akonia Holographics, a startup focused on making lenses for augmented reality glasses, which may launch sometime in 2020. Akonia Holographics was founded in 2012 by a group of holography scientists and had originally focused on holographic data storage before shifting its efforts to creating displays for augmented reality glasses.

Why did Apple acquire Akonia Holographics?

With this acquisition, Apple aims to create a wearable device that can superimpose digital information on the real world via a thin lens. Akonia’s display technology allows for thin, transparent smart glass lenses that display vibrant, full-color, wide field-of-view images, which could help Apple reach that ambition. According to its website, Akonia holds a portfolio of more than 200 patents related to holographic systems and materials.

As reported by Reuters, the Akonia acquisition is a clear indication of how Apple might handle one of the most daunting challenges in augmented reality hardware: producing crystal-clear optical displays that are thin and light enough to fit into glasses resembling everyday frames, with images bright enough for outdoor use, and suited to mass manufacturing at a relatively low price.

In 2013, Apple acquired a small Israeli firm called PrimeSense that made three-dimensional sensors. The iPhone X, launched last year, uses a similar sensor to power its facial recognition features. Similarly, the Akonia acquisition might result in a new AR lens appearing in one of Apple’s upcoming devices.

Ming-Chi Kuo, a noted Apple analyst, has said that the Apple Car will launch sometime in 2023-2025. The car project is codenamed 'Titan', according to The Wall Street Journal. Kuo said, “Apple’s leading technology advantages (e.g. AR) would redefine cars and differentiate Apple Car from peers’ products”. Acquiring Akonia could also be part of bringing the latest AR tech into the car initiative and moving its timelines forward.

Apple bans Facebook’s VPN app from the App Store for violating its data collection rules
Stack skills, not degrees: Industry-leading companies, Google, IBM, Apple no longer require degrees
16 year old hacked into Apple’s servers, accessed ‘extremely secure’ customer accounts for over a year undetected

Apple Entrepreneur Camp applications open for Black founders and developers (from News - Apple Developer)

Matthew Emerick
19 Oct 2020
1 min read
Apple Entrepreneur Camp supports underrepresented founders and developers as they build the next generation of cutting-edge apps, and helps form a global network that encourages the pipeline and longevity of these entrepreneurs in technology.

Applications are now open for the first cohort for Black founders and developers, which runs online from February 16 to 25, 2021. Attendees receive code-level guidance, mentorship, and inspiration with unprecedented access to Apple engineers and leaders. Applications close on November 20, 2020.

Learn more about Apple Entrepreneur Camp
Learn about some of our inspiring alumni

Bloomberg says Google, Mastercard covertly track customers’ offline retail habits via a secret million dollar ad deal

Melisha Dsouza
31 Aug 2018
3 min read
Google and Mastercard have apparently signed a deal that was kept secret from most of Mastercard's two billion cardholders. The deal allows Google to track users’ offline buying habits: the search giant has been tracking offline purchases made in stores through Mastercard purchase histories and correlating them with online ad interactions. Neither company has released an official statement to the public about the arrangement.

In May 2017, Google announced a service called “Store Sales Measurement”, which recorded about 70 percent of US credit and debit card transactions through third-party partnerships. Selected Google advertisers had access to this new tool, which tracked whether the ads they ran online led to a sale at a physical store in the U.S. As reported by Bloomberg, an anonymous source familiar with the deal stated that Mastercard also provided Google with customers’ transaction data, thus contributing to that 70% share. It is quite possible that other credit card companies contribute their customers’ transaction data as well. Advertisers spend lavishly on Google to gain valuable insights into the link between digital ads, website visits, and online purchases, which supports the speculation that the deal is profitable for Google.

How do they track how you shop?

A customer logs into their Google account on the web and clicks on a Google ad. They may browse a certain item without purchasing it right away. If, within 30 days, they use their Mastercard to buy the same item in a physical store, Google will send the advertiser a report about the product and the effectiveness of its ads, including a section for “offline revenue” that tells the advertiser about the retail sale. (A purely conceptual sketch of this attribution logic appears at the end of this article.) All of this raises the question of how much Google actually knows about your personal details.

Both Google and Mastercard have clarified to The Verge that the data is anonymized in order to protect personally identifiable information. However, Google declined to confirm the deal with Mastercard. A Google spokesperson released a statement to MailOnline saying: "Before we launched this beta product last year, we built a new, double-blind encryption technology that prevents both Google and our partners from viewing our respective users’ personally identifiable information. We do not have access to any personal information from our partners’ credit and debit cards, nor do we share any personal information with our partners. Google users can opt-out with their Web and App Activity controls, at any time.”

This controversy follows closely on the heels of an earlier debacle last week, when it was discovered that Google provides advertisers with location history data collated from Google Maps and other, more granular data points collected by its Android operating system. That data, however, never revealed whether a customer actually purchased a product. Toggling off "Web and App Activity" (enabled by default) turns this feature off. The setting also controls whether Google can pinpoint your exact GPS coordinates through Maps data and browser searches, and whether it can cross-check a customer's offline purchases with their online ad-related activity.

Read more in-depth coverage on this news, first reported at Bloomberg.

Google slams Trump’s accusations, asserts its search engine algorithms do not favor any political ideology
Google’s Protect your Election program: Security policies to defend against state-sponsored phishing attacks, and influence campaigns
Google Titan Security key with secure FIDO two factor authentication is now available for purchase
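To make the 30-day attribution window described above concrete, here is a purely conceptual Python sketch. It is not Google's system or API (which, per Google's statement, relies on double-blind encryption so that neither party sees the other's raw records); the field names and data are invented for illustration only.

from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(days=30)

def offline_conversions(ad_clicks, store_purchases):
    """Return purchases that follow an ad click for the same item within 30 days.

    Both arguments are lists of dicts with (hashed) 'user', 'item', and 'time'
    keys. Purely illustrative of the attribution idea, nothing more.
    """
    conversions = []
    for purchase in store_purchases:
        for click in ad_clicks:
            same_user_and_item = (click["user"] == purchase["user"]
                                  and click["item"] == purchase["item"])
            delay = purchase["time"] - click["time"]
            if same_user_and_item and timedelta(0) <= delay <= ATTRIBUTION_WINDOW:
                conversions.append(purchase)
                break
    return conversions

clicks = [{"user": "u1", "item": "shoes", "time": datetime(2018, 8, 1)}]
purchases = [{"user": "u1", "item": "shoes", "time": datetime(2018, 8, 20)}]
print(len(offline_conversions(clicks, purchases)))  # -> 1: counted as "offline revenue"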

Firefox Nightly’s Secure DNS Experimental Results out

Fatema Patrawala
30 Aug 2018
4 min read
During July 2018, a planned Firefox Nightly experiment was performed involving secure DNS via the DNS over HTTPS (DoH) protocol. About 25,000 Firefox Nightly 63 users had agreed to be part of Nightly experiments and participated in this study. Cloudflare operated the DoH servers that were used, according to the privacy policy it had agreed to with Mozilla. Each user was additionally given information directly in the browser about the project, including the service provider and an opportunity to decline participation in the study.

Browser users currently risk spying and spoofing of their DNS information due to reliance on the unsecured traditional DNS protocol. Using a trusted cloud-based DoH service in place of traditional DNS is a significant change in how networking operates, and it raises many things to consider going forward when selecting servers. However, the initial experiment focused on validating two separate technical questions: Does the use of a cloud DNS service perform well enough to replace traditional DNS? And does the use of a cloud DNS service create additional connection errors?

The experiment is now complete, and here are the highlights of the findings. HTTPS with a cloud service provider shows a minor performance impact on the majority of non-cached DNS queries compared to traditional DNS. Most queries were around 6 milliseconds slower, which seems an acceptable cost for the benefit of securing the data. However, the slowest DNS transactions performed much better with the new DoH-based system than with the traditional one, sometimes by hundreds of milliseconds.

A chart published by the Firefox Nightly team shows the net improvement of the DoH performance distribution versus the traditional DNS performance distribution, with the fastest DNS exchanges at the left of the chart and the slowest at the right. The slowest 20% of DNS exchanges are radically improved (improvements of several seconds are truncated at the extreme for chart formatting reasons), while the majority of exchanges exhibit a small, tolerable amount of overhead when using a cloud service. It is a good result.

The Firefox team hypothesized that the improvements at the tail of the distribution derive from two advantages DoH provides over traditional DNS. First, the consistency of the service operation, compared with dealing with thousands of different operating systems and resolvers that are overloaded, unmaintained, or forwarded to strange locations. Second, HTTP’s use of modern loss recovery and congestion control allows it to operate better on very busy or low-quality networks.

The experiment also considered connection error rates and found that users using the DoH cloud service in ‘soft-fail’ mode experienced no statistically significant difference in connection errors compared with a control group using traditional DNS. Soft-fail mode primarily uses DoH, but falls back to traditional DNS when a name does not resolve correctly or when a connection to the DoH-provided address fails. The connection error rate measures whether an HTTP channel can be successfully established from a name, and therefore incorporates the fallbacks into its measurements. These fallbacks are needed to ensure seamless operation in the presence of firewalled services and captive portals.

“We’re committed long term to building a larger ecosystem of trusted DoH providers that live up to a high standard of data handling. We’re also working on privacy preserving ways of dividing the DNS transactions between a set of providers, and/or partnering with servers geographically. Future experiments will likely reflect this work as we continue to move towards a future with secured DNS deployed for all of our users,” says the Firefox Nightly team.

Mozilla’s new Firefox DNS security updates spark privacy hue and cry
Firefox Nightly browser: Debugging your app is now fun with Mozilla’s new ‘time travel’ feature
Firefox has made a password manager for your iPhone
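To make the protocol concrete, below is a small, illustrative Python sketch of a DoH lookup with the same 'soft-fail' behaviour described above: it queries Cloudflare's public JSON DoH endpoint over HTTPS and falls back to the operating system's traditional resolver on any failure. This is not how Firefox implements DoH internally; only the Cloudflare endpoint and JSON answer format are documented facts, the rest is illustration.

import socket
import requests

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"  # Cloudflare's public DoH resolver

def resolve_soft_fail(hostname):
    """Resolve a hostname via DNS over HTTPS, falling back to traditional DNS."""
    try:
        resp = requests.get(
            DOH_ENDPOINT,
            params={"name": hostname, "type": "A"},
            headers={"accept": "application/dns-json"},  # request the JSON answer format
            timeout=5,
        )
        resp.raise_for_status()
        answers = resp.json().get("Answer", [])
        a_records = [a["data"] for a in answers if a.get("type") == 1]  # type 1 = A record
        if a_records:
            return a_records[0]
    except (requests.RequestException, ValueError, KeyError):
        pass  # soft-fail: silently fall back to the traditional resolver below
    return socket.gethostbyname(hostname)

print(resolve_soft_fail("example.com"))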

Google open sources their differential privacy library to help protect users’ private data

Vincy Davis
06 Sep 2019
5 min read
Yesterday, underscoring the importance of strong privacy protections in firms, Google open sourced a differential privacy library that it uses in its own core products. Its approach is an end-to-end implementation of a differentially private query engine that is generic and scalable. In essence, developers can use this library to build tools that work with aggregate data without revealing personally identifiable information.

According to Miguel Guevara, the product manager of privacy and data protection at Google, “Differentially-private data analysis is used by an organization to sort through the majority of their data and safeguard them in such a way that no individual’s data is distinguished or re-identified. This approach can be used for various purposes like focusing on features that can be particularly difficult to execute from scratch.” Google’s differential privacy library supports differentially private aggregations over databases, even when individuals can each be associated with arbitrarily many rows. The company has been using differential privacy to build supportive features like showing “how busy a business is over the course of a day or how popular a particular restaurant’s dish is in Google Maps, and improve Google Fi”, says Guevara in the official blog post.

Google researchers have published their findings in a research paper. The paper describes a C++ library of ε-differentially private algorithms, which can be used to produce aggregate statistics over numeric data sets containing private or sensitive information. The researchers have also provided a stochastic tester to check the correctness of the algorithms. One of the researchers explains the motive behind the library on Twitter: “The main focus of the paper is to explain how to protect *users* with differential privacy, as opposed to individual records. So much of the existing literature implicitly assumes that each user is associated to only one record. It's rarely true in practice!”

Key features of the differential privacy library

Statistical functions: Developers can use the library to compute Count, Sum, Mean, Variance, Standard deviation, and Order statistics (including min, max, and median).
Rigorous testing: The library includes manual and extensible stochastic testing. The stochastic framework produces a database depending on the result of differential privacy and contains four components: database generation, search procedure, output generation, and predicate verification. The researchers have also open sourced the ‘Stochastic Differential Privacy Model Checker library’ for reproducibility.
Ready to use: The library uses the common Structured Query Language (SQL) extension, which can capture most data analysis tasks based on aggregations.
Modular: The library can be extended to include other functionalities such as additional mechanisms, aggregation functions, or privacy budget management. It can also be extended to handle end-to-end user-level differential privacy testing.

How does differentially private SQL work with bounded user contributions?

The Google researchers implemented the differential privacy (DP) query engine as a collection of custom SQL aggregation operators and a query rewriter. The SQL engine tracks user ID metadata to invoke the DP query rewriter, and the query rewriter performs anonymization semantics validation and enforcement. The rewriter then proceeds in two steps: the first step validates the table subqueries, and the second step samples a fixed number of partially aggregated rows for each user, which limits the user contribution across partitions. Finally, the system computes a cross-user DP aggregation of what each user contributes to each GROUP BY partition, limiting the user contribution within partitions. The paper states, “Adjusting query semantics is necessary to ensure that, for each partition, the cross-user aggregations receive only one input row per user.”

In this way, the differentially private SQL system captures most data analysis tasks using aggregations. The mechanisms implemented in the system use a stochastic checker to prevent regressions and increase the quality of the privacy guarantee. Though the algorithms presented in the paper are simple, the researchers maintain that, based on the empirical evidence, the approach is useful, robust, and scalable. In the future, the researchers hope to see usability studies that test the success of these methods. They also see room for significant accuracy improvements using Gaussian noise and better composition theorems.

Many developers have appreciated that Google open sourced its differential privacy library for others.
https://twitter.com/_rickkelly/status/1169605755898515457
https://twitter.com/mattcutts/status/1169753461468086273

In contrast, many people on Hacker News are not impressed with Google’s initiative and feel that the company is misleading users with this announcement. One of the comments reads, “Fundamentally, Google's initiative on differential privacy is motivated by a desire to not lose data-based ad targeting while trying to hinder the real solution: Blocking data collection entirely and letting their business fail. In a world where Google is now hurting content creators and site owners more than it is helping them, I see no reason to help Google via differential privacy when outright blocking tracking data is a viable solution.”

You can check out the differential privacy GitHub page and the research paper for more information on Google’s research.

Latest Google News
Google is circumventing GDPR, reveals Brave’s investigation for the Authorized Buyers ad business case
Android 10 releases with gesture navigation, dark theme, smart reply, live captioning, privacy improvements and updates to security
Google researchers present Weight Agnostic Neural Networks (WANNs) that perform tasks without learning weight parameters
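The core idea behind ε-differentially private aggregations such as the counts and sums listed above is to add carefully calibrated noise to the true result. The snippet below is a conceptual Python sketch of the classic Laplace mechanism for a count query; it is not code from Google's C++ library, and the epsilon value and data are invented for illustration.

import numpy as np

def dp_count(values, predicate, epsilon):
    """Return an epsilon-differentially private count using the Laplace mechanism.

    A count has sensitivity 1 when each user contributes at most one row, so
    Laplace noise with scale 1/epsilon suffices. (Bounding how many rows a
    single user can contribute, the harder problem, is what Google's query
    rewriter handles.)
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy example: how many visits lasted longer than 30 minutes?
visit_minutes = [5, 42, 37, 12, 55, 8, 61]
print(dp_count(visit_minutes, lambda m: m > 30, epsilon=0.5))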
Emotet, a dangerous botnet spams malicious emails, “targets 66,000 unique emails for more than 30,000 domain names” reports BleepingComputer

Vincy Davis
19 Sep 2019
4 min read
Three days ago, Emotet, a dangerous malware botnet, was found sending malicious emails to many countries around the globe. Emails bearing Emotet's signature were first spotted on the morning of September 18th in countries including Germany, the United Kingdom, Poland, Italy, and the U.S.A., targeting individuals, businesses, and government entities. This is not Emotet's first outing: it was first found being used as a banking trojan in 2014.

https://twitter.com/MalwareTechBlog/status/1173517787597172741

Any recipient of the infected mail who unknowingly downloaded and executed the attachment may have exposed themselves to the Emotet malware. Once infected, the computer is added to the Emotet botnet, which uses it as a downloader for other threats. The Emotet botnet was able to compromise many websites, including customernoble.com, taxolabs.com, www.mutlukadinlarakademisi.com, and more.

In a statement to BleepingComputer, security researchers from email security firm Cofense Labs said, “Emotet is now targeting almost 66,000 unique emails for more than 30,000 domain names from 385 unique top-level domains (TLDs).” The malicious emails are suspected to originate from “3,362 different senders, whose credentials had been stolen. The count for the total number of unique domains reached 1,875, covering a little over 400 TLDs.” Brad Duncan, a security researcher, also reported that some U.S.-based hosts received Trickbot, a banking trojan turned malware dropper that Emotet drops as a secondary infection.

https://twitter.com/malware_traffic/status/1173694224572792834

What did the Emotet botnet do in its last outing?

According to BleepingComputer, the command and control (C2) servers for the Emotet botnet became active at the beginning of June 2019 but did not send out any instructions to infected machines until August 22. Presumably, the botnet was taking time to rebuild itself, establish new distribution channels, and prepare for new spam campaigns; in short, it was under maintenance. Benkøw, a security researcher, had listed the stages required for the botnet to resume malicious activity.

https://twitter.com/benkow_/status/1164899159431946240

Emotet’s return was therefore not a surprise to many security researchers, who expected the botnet to revive sooner or later.

How does the Emotet botnet function?

Discovered in 2014, Emotet was originally designed as a banking trojan targeting mostly German and Austrian bank customers by stealing their login credentials. Over time, however, it has evolved into a versatile and effective malware operation. Once a device is infected, the Emotet botnet tries to penetrate associated systems via brute-force attacks. This enables Emotet to perform DDoS attacks or send out spam emails after obtaining a user’s financial data, browsing history, saved passwords, and Bitcoin wallets. The infected machine also contacts Emotet’s command and control (C&C) servers to receive updates, and the botnet uses those servers as a junkyard for storing stolen data. Per Cyren, a single Emotet bot can send a few hundred thousand emails in just one hour, meaning it is capable of sending a few million emails in a day. Emotet delivers modules that extract passwords from local apps and then spreads laterally to other computers on the same network. It is also capable of stealing entire email threads to be reused later in spam campaigns. Emotet additionally offers Malware-as-a-Service (MaaS), renting access to Emotet-infected computers to other malware groups.

Meanwhile, many people on Twitter are sharing details about Emotet to warn others.

https://twitter.com/BenAylett/status/1174560327649746944
https://twitter.com/papa_anniekey/status/1173763993325826049
https://twitter.com/evanderburg/status/1174073569254395904

Interested readers can check out the malware security analysis report for more information, and head over to BleepingComputer for more details.

Latest news in Security
LastPass patched a security vulnerability from the extensions generated on pop-up windows
An unsecured Elasticsearch database exposes personal information of 20 million Ecuadoreans including 6.77M children under 18
UK’s NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses

Google makes major inroads into healthcare tech by absorbing DeepMind Health

Amrata Joshi
14 Nov 2018
3 min read
Yesterday, Google announced that it is absorbing DeepMind Health, a London-based AI lab. DeepMind was acquired by Google in 2014 for £400 million; one of the reasons DeepMind joined hands with Google then was the opportunity to use Google’s scale and experience in building billion-user products.

Google and DeepMind Health working together on Streams

The team at DeepMind introduced Streams in 2017. It was first rolled out at the Royal Free Hospital, where it is primarily used to identify and treat acute kidney injury (AKI). The app provides real-time alerts and information, pushing the right information to the right clinician at the right time, and brings together important medical information such as blood test results in one place. It helps clinicians at partner hospitals spot serious issues while they are on the move. The Streams app was developed to help the UK’s National Health Service (NHS).

The need for artificial intelligence in Streams

The team at DeepMind was keen on using AI because of its potential to revolutionize the understanding of diseases. AI could help uncover the root causes of diseases by understanding how they develop, which could in turn help scientists discover new ways of treating them. The team at DeepMind plans to work on a number of innovative research projects, such as using AI to spot eye disease in routine scans. DeepMind’s goal is to make Streams an AI-powered assistant for nurses and doctors everywhere, by combining the best algorithms with intuitive design, all backed by rigorous evidence.

The future of Streams

Acute kidney injury (AKI) is responsible for 40,000 deaths in the UK every year. With Streams now powered by teams from both DeepMind Health and Google, that could change.

Antitrust and privacy concerns

Last year, the Royal Free NHS Foundation Trust in London went against data protection rules and gave 1.6 million patient records to DeepMind for a trial. Tension is now growing among privacy advocates in the UK because Google is getting its hands on healthcare-related information, which could be misused in the future, and many have responded negatively to the news. As DeepMind had previously promised not to share personally identifiable health data with Google, the move has many questioning DeepMind’s intentions.

https://twitter.com/juliapowles/status/1062417183404445696
https://twitter.com/DeepMind_Health/status/1062389671576113155
https://twitter.com/TomValletti/status/1062457943382245378

Read more about this news on DeepMind’s official blog post.

DeepMind open sources TRFL, a new library of reinforcement learning building blocks
Day 1 of Chrome Dev Summit 2018: new announcements and Google’s initiative to close the gap between web and native
Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users

Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices

Amrata Joshi
25 Jan 2019
4 min read
Lately, Facebook has been in the news for its data breaches and issues related to illegal data sharing. Adding to its troubles, yesterday advocacy groups such as the Open Markets Institute, Color of Change, and the Electronic Privacy Information Center, among others, wrote to the Federal Trade Commission requesting that the government intervene in how Facebook operates. The letter lists actions the FTC could take, including a multibillion-dollar fine, changes to the company’s hiring practices, and breaking up Facebook for abusing its market position.

Last week, Federal Trade Commission (FTC) officials were reportedly considering a record, multibillion-dollar fine for Facebook, according to a report by The Washington Post. As revealed last year, the data of over 87 million users was given to Cambridge Analytica, a political consulting firm, without users’ consent, for which Facebook was fined £500,000 last October. This time Facebook may have to pay far more than the $22.5 million fine imposed on Google in 2012 for tracking users of Apple’s Safari web browser. According to the FTC, Facebook may have violated a legally binding agreement with the government to protect the privacy of users’ personal data.

As a result of the Cambridge Analytica scandal and the subsequent issues with data and privacy, advocacy groups are now calling for Facebook to be broken up over the privacy violations and repeated consumer data breaches. The letter to the FTC reads, “The record of repeated violations of the consent order can no longer be ignored. The company’s business practices have imposed enormous costs on the privacy and security of Americans, children and communities of color, and the health of democratic institutions in the United States and around the world.”

According to the groups, it has been almost ten years since many organizations first brought Facebook’s unfair business practices, which threatened consumers’ privacy, to the commission’s attention. The letter continues, “Facebook has violated the consent order on numerous occasions, involving the personal data of millions, possibly billions, of users of its services. Based on the duration of the violations, the scope of the violations, and the number of users impacted by the violations, we would expect that the fine in this case would be at least two orders of magnitude greater than any previous fine.”

According to organizations like the Open Markets Institute and Color of Change, Facebook should be required to pay a $2 billion fine and divest ownership of Instagram and WhatsApp for failing to protect user data on those platforms as well. The groups have also urged the FTC to require Facebook to comply with Fair Information Practices for all future uses of personal data across all its services.

The letter reads, “Given that Facebook’s violations are so numerous in scale, severe in nature, impactful for such a large portion of the American public and central to the company’s business model, and given the company’s massive size and influence over American consumers, penalties and remedies that go far beyond the Commission’s recent actions are called for.” According to the letter, Facebook also breached its commitments to the Commission regarding the protection of WhatsApp user data. The letter further reads, “Facebook has operated for too long with too little democratic accountability. That should now end. At issue are not only the rights of consumers but also those of citizens. It should be for users of the services and for democratic institutions to determine the future of Facebook.”

According to The Verge, lawmakers have been quiet on breaking up Facebook. In an interview with The Verge, Sen. Mark Warner (D-VA), one of the senators at the forefront of the issue, said that breaking up the company was more of a “last resort.” According to U.S. Securities and Exchange Commission filings reviewed by The Hill, the five largest tech companies, Amazon, Apple, Google, Facebook, and Twitter, lobbied on a variety of issues, including trade, data privacy, immigration, and copyright. Mark Zuckerberg, chief executive of Facebook, even testified before Congress last year. Facebook spent $12.6 million on lobbying; it seems the data privacy issues pushed Facebook further into lobbying.

Facebook AI research introduces enhanced LASER library that allows zero-shot transfer across 93 languages
Russia opens civil cases against Facebook and Twitter over local data laws
Trick or Treat – New Facebook Community Actions for users to create petitions and connect with public officials

Masonite 2.0 released, a Python web development framework

Sugandha Lahoti
18 Jun 2018
2 min read
Masonite, the popular Python web development framework, has released a new version. Masonite 2.0 brings several new features, including new status codes, database seeding, built-in cron-style scheduling, controller constructor resolving, speed improvements, and much more.

A new ‘Tinker’ command

Masonite 2.0 adds a new Tinker command that starts a Python shell and imports the container. It works as a great debugging tool and can be used to verify that objects are loaded into the container correctly.

A new task scheduler

Masonite 2.0 adds a task scheduler, a new default package that allows scheduling recurring tasks. You can read about the Masonite Scheduler under the Task Scheduling documentation.

Automatic server reloading

A big update in Masonite 2.0 is the new --reload flag on the serve command. The server will now automatically restart when it detects a file change. You can use the -r flag as a shorthand.

Autoloading

With the new autoloading feature, you can list directories in the AUTOLOAD constant in the config/application.py file and all classes in them will automatically be loaded into the container. Autoloading is great for loading commands and models into the container when the server starts up. (A rough sketch of such a configuration follows at the end of this article.)

Database seeding support

Masonite 2.0 adds the ability to seed the database with dummy data. Seeding helps populate the database with data that will be needed during development.

Explicitly imported providers

Providers are now explicitly imported at the top of the file and added to the PROVIDERS list, located in config/providers.py. This completely removes the need for string providers and substantially boosts the performance of the application.

Status Code provider

Masonite 2.0 replaces the bland error pages for codes such as 404 and 500 with a cleaner view, and allows adding custom error pages.

Upgrading from Masonite 1.6 to Masonite 2.0

Masonite 1.6 to Masonite 2.0 involves quite a large number of changes and updates in a single release. However, upgrading takes only around 30 minutes for an average-sized project. Read the Masonite upgrade guide for a step-by-step walkthrough, and the release notes for the full list of features.

Python web development: Django vs Flask in 2018
What the Python Software Foundation & Jetbrains 2017 Python Developer Survey had to reveal
Should you move to Python 3? 7 Python experts’ opinions
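As a rough illustration of the autoloading feature described above, a project's configuration might look something like the sketch below. The directory names are hypothetical, and the exact constant and defaults should be checked against the Masonite 2.0 documentation rather than taken from this sketch.

# config/application.py (illustrative sketch of a Masonite 2.0 project)

# Directories listed here are scanned when the server starts, and the
# classes found in them are loaded into the service container automatically.
AUTOLOAD = [
    'app',            # e.g. models such as app/User.py
    'app/commands',   # custom craft commands
    'app/tasks',      # recurring tasks for the new task scheduler
]

With that in place, the development server can be run with the new reload behaviour described above (the --reload flag, or -r for short, on the serve command), so changes to autoloaded classes are picked up without a manual restart.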
Is space the final frontier for AI? NASA to reveal Kepler's latest AI backed discovery

Savia Lobo
12 Dec 2017
4 min read
Artificial intelligence is helping NASA in its expeditions. NASA’s Kepler, the planet-hunting telescope, has made a brand new discovery, and to announce this recent breakthrough NASA will be holding a major press conference this Thursday afternoon. Kepler, launched in 2009, has discovered numerous planets (exoplanets) outside our solar system, some of which could support life. In 2014, the telescope began a major mission called K2, which hunts for more exoplanets while studying other cosmic phenomena. The Kepler mission has previously produced huge discoveries, and has led to the surprising estimate that the universe includes numerous planets that could support life.

This recent breakthrough was carried out with the help of Google’s artificial intelligence, which analyzes the data coming from Kepler. With Google’s AI, NASA expects to cut down the time required to pick out planets that show the possibility of life. This is not far from the premise of Christopher Nolan’s famous movie Interstellar, in which the astronaut Joseph Cooper goes in search of other planets to colonize as sustaining life on Earth becomes increasingly difficult; after decades of research and exploratory missions to far-flung planets, the research team arrives at three candidate planets most likely to be suitable for human life. An AI-backed Kepler could have saved that team time in narrowing down the list of planets with the potential to host life. In short, the AI backing eases the scientists’ task by letting them brush through the data sent by the telescope and easily pick out planets that look interesting for further exploration.

NASA said that Paul Hertz, director of NASA’s astrophysics division; Christopher Shallue, a senior Google software engineer; Jessie Dotson, Kepler project scientist at NASA’s Ames Research Center in Silicon Valley, California; and other scientists would take part in the upcoming conference. Very little is known about the conference itself, as NASA is tight-lipped: its press release states only, cryptically, that machine learning “demonstrates new ways of analyzing Kepler data.” Let’s keep our enthusiasm intact for the disclosure of Kepler’s latest breakthrough this Thursday. To watch the conference live, stream it from NASA’s official website.

Until then, here are some ways we think AI could assist humans with space exploration:

Processing data collected in space: The ENVISAT satellite collects 400 terabytes of data per year, and as time advances the count may reach 720 terabytes a day. To process this data, scientists have created a network of computers. Each computer receives a small data packet and processes it with the help of AI before the data packets are regrouped. Using this data, scientists can track activity in the Earth’s atmosphere, keep track of solar activity, and so on.

Rapid communications to and from Earth: When talking to astronauts or computers orbiting the Earth, it takes less than a second for a radio wave to send signals to Earth from the International Space Station (ISS), but the time delay is different for different satellites and planets. It therefore isn’t feasible to relay commands for each action from Earth. With the help of AI, on-board computers can think for themselves and adjust the communication time frame accordingly.

Machines that explore planets for the possibility of life: Not all planets are suitable for humans to walk on, and even if they were, given the journey time for a mission it may not always be practical to have humans do the on-ground study themselves. For instance, the moons of Jupiter, specifically Ganymede and Europa, are interesting places to look for life due to the presence of vast liquid-water oceans, yet the intense radiation fields around Jupiter are hostile to human survival. Expeditions to look for the possibility of life on planets other than Earth can therefore be carried out by machines with the help of AI. NASA is developing a robot called Robonaut for the International Space Station; eventually, the expectation is that Robonaut will carry out risky spacewalks while astronauts manage it from the safety of the station’s interior.

There are many other ways AI can guide our space explorations, and scientists are still finding unique ways in which AI can assist them in their space missions.

Dart 2.2 is out with support for set literals and more!

Savia Lobo
27 Feb 2019
2 min read
Michael Thomsen, the Project Manager for Dart, announced the stable release of the general-purpose programming language Dart 2.2. This version, an incremental update to Dart 2, offers improved performance of ahead-of-time (AOT) compiled native code and a new set literal language feature.

Improvements in Dart 2.2

Improved AOT performance

Developers have improved AOT performance by 11-16% on microbenchmarks (at the cost of a ~1% increase in code size). Prior to this optimization, the compiled code had to make several lookups into an object pool to determine a destination address; the optimized AOT code can now call the destination directly using a PC-relative call.

Literals extended to support sets

Dart supported literal syntax only for Lists and Maps, which made initializing Sets awkward, since a set had to be initialized via a list as follows:

Set<String> currencies = Set.of(['EUR', 'USD', 'JPY']);

This code is inefficient due to the lack of literal support, and it also prevented currencies from being a compile-time constant. With Dart 2.2’s extension of literals to support sets, users can initialize a set and make it const using a convenient new syntax:

const Set<String> currencies = {'EUR', 'USD', 'JPY'};

Updated Dart language specification

Dart 2.2 includes the up-to-date Dart language specification, with the spec source moved to a new language repository. Developers have also added continuous integration so that a rolling draft specification is generated in PDF format as the specification for future versions of the Dart language evolves. Both the 2.2 version and the rolling Dart 2.x specification are available on the Dart specification page.

To know more about this announcement in detail, visit Michael Thomsen’s blog on Medium.

Google Dart 2.1 released with improved performance and usability
Google’s Dart hits version 2.0 with major changes for developers
Is Dart programming dead already?

Rust 1.36.0 releases with a stabilized ‘Future’ trait, NLL for Rust 2015, and more

Bhagyashree R
05 Jul 2019
3 min read
Yesterday, the team behind Rust announced the release of Rust 1.36.0. This release brings a stabilized 'Future' trait, NLL for Rust 2015, the stabilized alloc crate as the core allocation and collections library, a new --offline flag for Cargo, and more. Following are some of the updates in Rust 1.36.0.

The stabilized 'Future' trait

A ‘Future’ in Rust represents an asynchronous value that allows a thread to continue doing useful work while it waits for the value to become available. This trait has long been awaited by Rust developers, and with this release it has finally been stabilized. “With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we'll tell you more about in the future,” the Rust release team added.

The alloc crate is stable

The ‘std’ crate of the standard library provides types like Box<T> as well as OS functionality, but it requires a global allocator and other OS capabilities. Beginning with Rust 1.36.0, the parts of std that depend only on a global allocator are available in the ‘alloc’ crate, and std will re-export these parts later.

Use MaybeUninit<T> instead of mem::uninitialized

Previously, the ‘mem::uninitialized’ function allowed you to bypass Rust’s memory-initialization checks by pretending to generate a value of type T without doing anything. Though the function has proven handy for lazily allocating arrays, it can be dangerous in many other scenarios, as the Rust compiler simply assumes that values are properly initialized. In Rust 1.36.0, the MaybeUninit<T> type has been stabilized to solve this problem. Now the Rust compiler understands that it should not assume that a MaybeUninit<T> is a properly initialized T. This enables gradual initialization to be done more safely, ending with a call to ‘.assume_init()’.

Non-lexical lifetimes (NLL) for Rust 2015

The Rust team introduced NLL in December last year when announcing Rust 1.31.0. It is an improvement to Rust’s static model of lifetimes that makes the borrow checker smarter and more user-friendly. When first announced, it was stabilized only for Rust 2018; the team has now backported it to Rust 2015 as well. In the future, we can expect all Rust editions to use NLL.

--offline support in Cargo

Previously, Cargo, the Rust package manager, would exit with an error if it needed to access the network and the network was not available. Rust 1.36.0 adds a new ‘--offline’ flag that makes the dependency resolution algorithm use only locally cached dependencies, even if a newer version might exist.

These were some of the updates in Rust 1.36.0. Read the official announcement to know more in detail.

Introducing Vector, a high-performance data router, written in Rust
Brave ad-blocker gives 69x better performance with its new engine written in Rust
Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust
Mozilla is exploring ways to reduce notification permission prompt spam in Firefox

Bhagyashree R
03 Apr 2019
3 min read
Yesterday, Mozilla announced that it is launching two experiments to understand how it can reduce “permission prompt spam” in Firefox. Last year, it added a feature to Firefox that allows users to block permission prompts completely; it is now planning a new option for those who do not want to take such a drastic step.

Permission prompts have become quite common. They allow websites to ask for user permission to access powerful features when needed, but they often become annoying when users are shown unsolicited, out-of-context prompts, for instance ones asking for permission to send push notifications. Mozilla's telemetry data shows that the notification prompt is the most frequently shown permission prompt, with about 18 million prompts shown on Firefox Beta from December 25, 2018 to January 24, 2019. Of these 18 million prompts, not even 3 percent were accepted by users, and 19 percent of the prompts caused users to immediately leave the site. Such a low acceptance rate led to two conclusions: first, that some websites show the notification prompt without the intent of using it to enhance the user experience, or fail to express their intent clearly in the prompt; and second, that some websites show the notification permission prompt too early, without giving users enough time to decide whether they want notifications.

To get a better idea of how and when websites should ask for notification permissions, Mozilla is launching these two experiments:

Experiment 1: Requiring user interaction for notification permission prompts in Nightly 68. The first experiment requires a user gesture, like a click or a keystroke, to trigger the code that requests permission. From April 1st to 29th, requests for permission to use Notifications will be temporarily denied unless they follow a click or keystroke. In the first two weeks, no user-facing notification will be shown when the restriction is applied to a website. In the last two weeks of the experiment, an animated icon will be shown in the address bar when the restriction is applied; if the user clicks on the icon, they will be presented with the prompt at that time.

Experiment 2: Collecting interaction and environment data around permission prompts from release users. Mozilla believes that requiring user interaction is not a perfect solution to the permission spam problem. To come up with a better approach, it wants more insight into how Firefox users interact with permission prompts, so it plans to run an experiment in Firefox Release 67 to gather information about the circumstances in which users interact with them: Have they been on the site for a long time? Have they rejected a lot of permission prompts before? With this experiment, Mozilla aims to collect a set of possible heuristics for future permission prompt restrictions.

To know more in detail, visit Mozilla’s official blog.

Mozilla launches Firefox Lockbox, a password manager for Android
Mozilla’s Firefox Send is now publicly available as an encrypted file sharing service
Mozilla Firefox will soon support ‘letterboxing’, an anti-fingerprinting technique of the Tor Browser

An AI startup now wants to monitor your kids’ activities to help them grow ‘securly’

Natasha Mathur
10 Jan 2019
3 min read
AI is everywhere, and now it is helping monitor kids’ activities to maintain safety across schools and prevent school shooting incidents. An AI startup called Securly, co-founded by Vinay Mahadik and Bharath Madhusudan, focuses on student safety with features such as web filtering, cyberbullying monitoring, and self-harm alerts. Its cloud-based web filters maintain an age-appropriate internet, monitor bullying, and ensure that schools remain CIPA-compliant. Another feature in Securly, called ‘auditor’, works with Google’s Gmail service and sends alerts when a risk of bullying or self-harm is detected. There is also a tipline feature through which anonymous tips can be submitted by phone, text, or email.

The machine learning algorithms used by Securly are trained by safety specialists using examples of safe and unsafe content. Once the algorithms flag content as disturbing, 24x7 student safety experts evaluate the context behind the activity and reach out to schools and authorities as needed.

Securly raised $16 million in a Series B funding round last month, led by Defy Partners, bringing the total it has raised to $24 million. The company now wants to use these funds to expand its research and development in K-12 safety. Mahadik is also looking at technologies that can be used across schools without hampering kids’ privacy. He told Forbes, ”You could say show me something that happened on the playground where a bunch of kids punched or kicked a certain kid. If you can avoid personally identifying kids and handle the data responsibly, some tech like this could be beneficial”.

Securly currently has over 2,000 paying school districts using its free Chromebook filtering and email auditing services. However, public reaction to the news is not entirely positive: many people are criticizing the startup for shifting the focus away from the real issue (providing kids with much-needed counseling and psychological help, implementing family counseling programs, and so on) and instead promoting tracking every kid’s move to make sure they never falter.

Securly is not the only surveillance service to receive heavy criticism. Predictim, an online service that uses AI to analyze the risks associated with a babysitter, also came under the spotlight over concerns about its biased algorithms and for violating babysitters’ privacy.

https://twitter.com/ashedryden/status/1083084280736202752
https://twitter.com/ashedryden/status/1083087232897028096
https://twitter.com/jennifershehane/status/1083100079123124224
https://twitter.com/dmakogon/status/1083092624410660865

Babysitters now must pass Predictim’s AI assessment to be “perfect” to get the job
Center for the governance of AI releases report on American attitudes and opinions on AI
Troll Patrol Report: Amnesty International and Element AI use machine learning to understand online abuse against women