
Tech News

3711 Articles

Apple revoked Facebook developer certificates due to misuse of Apple’s Enterprise Developer Program; Google also disabled its iOS research app

Savia Lobo
31 Jan 2019
3 min read
Facebook employees are experiencing turbulent times as Apple has decided to revoke the social media giant's developer certificates. The move follows a TechCrunch report which said that Facebook paid users, including teens, $20 per month to install a "Facebook Research" app on their devices, allowing the company to track their mobile and web browsing activities. Following the revocation, Facebook employees are unable to access early versions of Facebook apps such as Instagram and Messenger, as well as internal apps used for everyday activities such as ordering food and finding locations on a map.

Yesterday, Apple announced that it had shut down the Facebook Research app for iOS. According to Apple, "We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple." The company further said, "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data."

Per a Mashable report, "Facebook employees argued that Apple's move was merely an attempt to distract from an embarrassing FaceTime bug that went public earlier in the week." One employee commented, "Anything to take the heat off the FaceTime security breach." Facebook also said that it is "working closely with Apple to reinstate our most critical internal apps immediately."

Mark Zuckerberg has also received a stern letter from Senator Mark Warner, including a list of questions about the company's data-gathering practices, after the TechCrunch report went viral. In a statement, Warner said, "It is inherently manipulative to offer teens money in exchange for their personal information when younger users don't have a clear understanding of how much data they're handing over and how sensitive it is."

Google disabled its iOS app too

Similar to Facebook, Google distributed a private app, Screenwise Meter, to monitor how people use their iPhones, rewarding users with gift cards from Google's Opinion Rewards program in exchange for information on their internet usage. Yesterday, however, Google announced that it has disabled the iOS app. Google's Screenwise Meter app is part of a program that has been around since 2012; it first tracked household web access through a Chrome extension and a special Google-provided tracking router. The program is open to anyone aged 18 or above, but allows users aged 13 and above to join if they are in the same household. Facebook's tracking app, on the other hand, targeted people between the ages of 13 and 25.

A Google spokesperson told The Verge, "The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices. This app is completely voluntary and always has been. We’ve been upfront with users about the way we use their data in this app, we have no access to encrypted data in apps and on devices, and users can opt out of the program at any time."

To know more about this news, head over to The Verge.

Also read:
Facebook researchers show random methods without any training can outperform modern sentence embeddings models for sentence classification
Stanford experiment results on how deactivating Facebook affects social welfare measures
Facebook pays users $20/month to install a 'Facebook Research' VPN that spies on their phone and web activities, TechCrunch reports


.NET Core 3 Preview 2 is here!

Prasad Ramesh
31 Jan 2019
2 min read
This Tuesday, Microsoft announced .NET Core 3 Preview 2, with new features in .NET Core 3 and C# 8.

C# 8

The eighth iteration of C# is a major release and includes many new features.

Using declarations: statements don't need to be indented now, as a using declaration no longer requires its own nested block.

Switch expressions: C# 8 introduces switch expressions with a new, terser syntax that returns a value, since a switch expression is an expression. The feature is fully integrated with pattern matching.

Async streams: the compiler and the framework libraries must match for async streams to work. You will need .NET Core 3.0 Preview 2 and Visual Studio 2019 Preview 2; alternatively, you can use the C# extension for Visual Studio Code.

IEEE floating-point improvements: the goal is to expose all the operations that are needed and make them behaviorally compliant with the IEEE spec.

A fast in-box JSON writer and JSON document

Two new types were added: System.Text.Json.Utf8JsonWriter and System.Text.Json.JsonDocument.

Utf8JsonWriter enables a high-performance, non-cached way to write UTF-8 encoded JSON text from common .NET types.

JsonDocument, built on top of Utf8JsonReader, enables parsing JSON data into a read-only Document Object Model (DOM) that can be queried to support enumeration and random access.

Assembly unloadability

Assembly unloadability is a new capability of AssemblyLoadContext. It is transparent and exposed with only a few new APIs: a loader context can now be unloaded, which releases all of the memory for static fields, instantiated types, and the assembly itself.

Visual Studio support

Using .NET Core 3 for development requires Visual Studio 2019. WPF and Windows Forms templates were added to the New Project dialog, for easier access than going via the command line.

These were a select few updates from the new .NET Core 3 Preview 2; for a complete list of changes, visit the Microsoft blog.

Also read:
Microsoft Connect(); 2018: .NET Foundation open membership, .NET Core 2.2, .NET Core 3 Preview 1 released, WPF, WinUI, Windows Forms open sourced
What to expect in ASP.NET Core 3.0
.NET Core 3.0 and .NET Framework 4.8 more details announced


Facebook researchers show random methods without any training can outperform modern sentence embeddings models for sentence classification

Natasha Mathur
31 Jan 2019
4 min read
John Wieting of Carnegie Mellon University and Douwe Kiela of Facebook AI Research published a paper titled "No training required: Exploring random encoders for sentence classification" earlier this week.

A sentence embedding is a vector representation of the meaning of a sentence. It is most often created by transforming word embeddings with a composition function, which is often nonlinear and recurrent in nature. Most of these word embeddings are initialized from pre-trained embeddings. The sentence embeddings are then used as features for a collection of downstream tasks.

The paper explores three different approaches for computing sentence representations from pre-trained word embeddings, using nothing but random parameterizations. It considers two well-known sentence embedding models: SkipThought (introduced by Ryan Kiros et al. in Advances in Neural Information Processing Systems) and InferSent (introduced by Alexis Conneau et al. in "Supervised learning of universal sentence representations from natural language inference data"). As mentioned in the paper, SkipThought took around one month to train, while InferSent requires large amounts of annotated data. "We examine to what extent we can match the performance of these systems by exploring different ways for combining nothing but the pre-trained word embeddings. Our goal is not to obtain a new state of the art but to put the current state of the art methods on more solid footing," state the researchers.

Approaches used

The paper describes three approaches for computing sentence representations from pre-trained word embeddings:

Bag of random embedding projections (BOREP): a single random projection is applied in a standard bag-of-words (or bag-of-embeddings) model. A matrix is randomly initialized, with dimensions given by the size of the projection and the size of an input word embedding; its values are sampled uniformly.

Random LSTMs: this approach uses bidirectional LSTMs without any training. The LSTM weight matrices and their corresponding biases are initialized uniformly at random.

Echo state networks (ESNs): ESNs were primarily designed and used for sequence prediction problems, where, given a sequence X, a label y is predicted for each step in the sequence. The main goal is to minimize the error between the predicted ŷ and the target y at each timestep. In the paper, the researchers diverge from the typical per-timestep ESN setting and instead use an ESN to produce a random representation of a sentence. A bidirectional ESN is used, with the reservoir states concatenated for both directions and then pooled over to generate a sentence representation.

Results

For evaluation, the following downstream tasks are used: sentiment analysis (MR, SST), question-type classification (TREC), product reviews (CR), subjectivity (SUBJ), opinion polarity (MPQA), paraphrasing (MRPC), entailment (SICK-E, SNLI), and semantic relatedness (SICK-R, STSB). The three random sentence encoders are evaluated against the InferSent and SkipThought models. As per the results:

Among the random sentence encoders, ESNs outperformed BOREP and random LSTMs on all downstream tasks.

Compared to InferSent, the performance gains over the random methods are not as large as one might expect, despite the fact that InferSent requires annotated data and takes more time to train, whereas the random sentence encoders can be applied immediately.

For SkipThought, the gain over the random methods (which do have better word embeddings) is even smaller. SkipThought took a very long time to train, and in the case of SICK-E it is better to use BOREP. ESNs also outperform SkipThought on most of the downstream tasks.
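Of the three approaches, BOREP is simple enough to sketch in a few lines of NumPy. The sketch below is an illustrative reconstruction, not the authors' code; the projection dimension, the uniform initialization range, and the choice of max pooling are assumptions.

```python
import numpy as np

def borep_encode(word_embeddings, proj_dim=4096, seed=0):
    """Bag of random embedding projections (BOREP), sketched.

    word_embeddings: (n_words, d_in) array of pre-trained word embeddings
    for one sentence. Returns a single sentence vector of size proj_dim.
    """
    d_in = word_embeddings.shape[1]
    rng = np.random.default_rng(seed)
    # Randomly initialize the projection matrix and sample its values
    # uniformly; this initialization range is an assumption.
    bound = 1.0 / np.sqrt(d_in)
    W = rng.uniform(-bound, bound, size=(proj_dim, d_in))
    projected = word_embeddings @ W.T   # project each word embedding
    return projected.max(axis=0)        # pool over words (max pooling)

# Toy sentence: 5 words with 300-dimensional embeddings
words = np.random.default_rng(1).standard_normal((5, 300))
sentence_vector = borep_encode(words)   # shape (4096,)
```

Note that no training happens anywhere in this function: the only learned inputs are the pre-trained word embeddings themselves, which is exactly the point the paper makes.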
"The point of these results is not that random methods are better than these other encoders, but rather that we can get very close and sometimes even outperform those methods without any training at all, from just using the pre-trained word embeddings," state the researchers.

For more information, check out the official research paper.

Also read:
Amazon Alexa AI researchers develop new method to compress Neural Networks and preserves accuracy of system
Researchers introduce a machine learning model where the learning cannot be proved
Researchers introduce a deep learning method that converts mono audio recordings into 3D sounds using video scenes


Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?

Bhagyashree R
31 Jan 2019
3 min read
Yesterday, Craig Hockenberry, a partner at The Iconfactory, filed a bug report on WebKit proposing a limit on how much JavaScript a website can load, in order to prevent abuse of users' computing resources.

Hockenberry feels that though content blocking has helped reduce resource abuse, providing better performance and battery life, there are a few downsides to using content blockers. His bug report said, "it's hurting many smaller sites that rely on advertising to keep the lights on. More and more of these sites are pleading to disable content blockers." This results in collateral damage to smaller sites.

As a solution, he suggested finding a way to incentivize JavaScript developers who keep their codebases small and minimal. "Great code happens when developers are given resource constraints... Lack of computing resources inspires creativity," he adds. As an end result, he believes sites could be allowed to show as many advertisements as they want, as long as the overall script size stays under a fixed amount. He also suggests asking users for permission with a simple dialog box, for example: "The site example.com uses 5 MB of scripting. Allow it?"

The bug report triggered a discussion on Hacker News, and though a few users agreed with his suggestion, most were against it. Some developers noted that users usually do not read dialogs and blindly click OK to make them go away. And even if users read the dialog, they would not know how much JavaScript is too much. "There's no context to tell her whether 5MB is a lot, or how it compares to payloads delivered by similar sites. It just expects her to have a strong opinion on a subject that nobody who isn't a coder themselves would have an opinion about," one user added.

Other ways to prevent JavaScript code from slowing down browsers

Despite the disagreement, developers do agree that there is a need for user-friendly resource limitations in browsers, and some suggested other ways to prevent JavaScript bloat. One of them said it would be good to add resource limits on CPU usage, number of HTTP requests, and memory usage:

"CPU usage (allows an initial burst, but after a few seconds dial down to max ~0.5% of CPU, with additional bursts allowed after any user interaction like click or keyboard)

Number of HTTP requests (again, initial bursts allowed and in response to user interaction, but radically delay/queue requests for the sites that try to load a new ad every second even after the page has been loaded for 10 minutes)

Memory usage (probably the hardest one to get right though)"

Another user adds, "With that said, I do hope we're able to figure out how to treat web "sites" and web "apps" differently - for the former, I want as little JS as possible since that just gets in the way of content, but for the latter, the JS is necessary to get the app running, and I don't mind if its a few megabytes in size."

You can read the bug reported on WebKit Bugzilla.

Also read:
D3.js 5.8.0, a JavaScript library for interactive data visualizations in browsers, is now out!
16 JavaScript frameworks developers should learn in 2019
npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn
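Hockenberry's proposal boils down to a simple budget check before a page's scripts are allowed to run. A hypothetical sketch of that logic follows; the 5 MB figure comes from his example dialog, while the function name and the byte accounting are assumptions made purely for illustration, not anything from WebKit.

```python
def check_script_budget(script_sizes_bytes, budget_mb=5.0):
    """Return (within_budget, total_mb) for a page's JavaScript payload.

    Sketches the proposed per-site limit: sum the sizes of all scripts a
    page wants to load and compare the total against a fixed budget. A
    real browser would hook into resource loading; this is illustrative.
    """
    total_mb = sum(script_sizes_bytes) / (1024 * 1024)
    return total_mb <= budget_mb, total_mb

# A page loading 3 MiB + 2 MiB + 1 MiB of scripts blows a 5 MB budget,
# so the browser would show the permission dialog.
ok, total = check_script_budget([3 * 2**20, 2 * 2**20, 1 * 2**20])
# ok is False, total is 6.0
```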


Stanford experiment results on how deactivating Facebook affects social welfare measures

Sugandha Lahoti
31 Jan 2019
3 min read
Stanford researchers have recently published a research paper, "The Welfare Effects of Social Media", in which they conducted an experiment to understand how Facebook affects a range of individuals, focusing on US users in the run-up to the 2018 midterm election.

Reducing screen time has become an important topic of debate in recent times. Excessive use of social media platforms hampers face-to-face social interaction. At a broader social level, social media platforms may increase political polarization, and they are also a primary channel for spreading fake news and misinformation online. Per the research paper, "Stanford researchers evaluated the extent to which time on Facebook substitutes for alternative online and offline activities. They studied Facebook’s broader political externalities via measures of news knowledge, awareness of misinformation, political engagement, and political polarization. They also analyze the extent to which behavioral forces like addiction and misprediction may cause sub-optimal consumption choices, by looking at how usage and valuation of Facebook change after the experiment."

What was the experiment?

In the experiment, the researchers recruited a sample of 2,844 users through Facebook ads. The ad said, "Participate in an online research study about internet browsing and earn an easy $30 in electronic gift cards." Next, they determined participants' willingness-to-accept (WTA) to deactivate their Facebook accounts for a period of four weeks ending just after the election. The 58 percent of subjects with a WTA below $102 were randomly assigned to either a Treatment group that was paid to deactivate or a Control group that was not.

Results of the experiment

The results fall into four patterns.

Substitution patterns: deactivating Facebook freed up 60 minutes per day for the average person in the Treatment group for other offline activities, such as watching television alone and spending time with friends and family. The Treatment group also reported spending 15 percent less time consuming news.

Political externalities: Facebook deactivation significantly reduced news knowledge and attention to politics. The Treatment group was less likely to say they follow news about politics or the President, and less able to correctly answer factual questions about recent news events. The overall index of news knowledge fell by 0.19 standard deviations, and the overall index of political polarization fell by 0.16 standard deviations.

Subjective well-being: deactivation caused small but significant improvements in measures of well-being, including self-reported happiness, life satisfaction, depression, and anxiety. The overall index of subjective well-being improved by 0.09 standard deviations.

Facebook's role in society: the Treatment group's reported usage of the Facebook mobile app was about 12 minutes (23 percent) lower than in Control, and post-experiment Facebook use was 0.61 standard deviations lower in Treatment than in Control. The researchers concluded that deactivation caused people to appreciate both Facebook's positive and negative impacts on their lives. A majority of the Treatment group agreed that deactivation was good for them, but they were also more likely to think that people would miss Facebook if they used it less. In conclusion, the opposing effects on these specific metrics cancel out, so the overall index of opinions about Facebook is unaffected, the research paper mentions.

Also read:
Facebook hires top EEF lawyer and Facebook critic as Whatsapp privacy policy manager
Facebook has blocked 3rd party ad monitoring plugin tools from the likes of ProPublica and Mozilla that let users see how they’re being targeted by advertisers…
Facebook pays users $20/month to install a ‘Facebook Research’ VPN that spies on their phone and web activities.
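The effects above are all reported "in standard deviations". A minimal sketch of what such a standardized effect looks like, assuming the simplest estimator (difference in group means scaled by the control group's standard deviation); the paper's actual estimator may differ.

```python
import numpy as np

def standardized_effect(treatment, control):
    """Difference in group means, in units of the control group's SD.

    This is the simplest version of an effect 'in standard deviations';
    it is illustrative, not the paper's exact estimator.
    """
    treatment = np.asarray(treatment, dtype=float)
    control = np.asarray(control, dtype=float)
    return (treatment.mean() - control.mean()) / control.std(ddof=1)

# Toy well-being index: the treatment scores sit exactly one
# control-group standard deviation above the control scores.
effect = standardized_effect([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
# effect == 1.0
```

On this scale, the paper's reported 0.09 standard-deviation improvement in subjective well-being is a small shift relative to the natural spread of the index.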


SBI data leak in India results in information of millions of customers exposed online

Prasad Ramesh
31 Jan 2019
2 min read
The State Bank of India, the nation's largest bank, leaked data on millions of its account holders. In the SBI data leak, information like bank balances and recent transactions was visible online. As per a TechCrunch report, two months of data was stored in a Mumbai-based data center. An SMS- and call-based system was used by customers to query information about their bank accounts. The SBI server was not password-protected, allowing anyone with an internet connection to access the data if they knew where to find it. It is unclear how long the server was left unprotected, but a security researcher found out about it and reported it to TechCrunch.

SBI Quick is a service that enables SBI customers to perform various actions with their bank accounts via SMS, missed calls, etc. Customers can get information like their balance and recent transactions on their phone; for people not using a smartphone, this is very useful. The report says that the back-end SMS system was exposed, leading to the SBI data leak. Since the server was not password-protected, information like phone numbers, bank balances, recent transactions, and even partial account numbers was exposed.

Speaking to TechCrunch, security researcher Karan Saini said: "The data available could potentially be used to profile and target individuals that are known to have high account balances." He added that knowing a phone number "could be used to aid social engineering attacks — which is one the most common attack vector here with regard to financial fraud." The report also says that the server has now been secured.

Also read:
GDPR complaint claims Google and IAB leaked ‘highly intimate data’ of web users for behavioral advertising
How to protect your VPN from Data Leaks
A WordPress plugin vulnerability is leaking Twitter account information of users making them vulnerable to compromise

Microsoft announces Internet Explorer 10 will reach end-of-life by January 2020

Bhagyashree R
30 Jan 2019
2 min read
Along with Windows 7, Microsoft is ending security updates and technical support for Internet Explorer 10 by January 2020, as it shared in a blog post yesterday, and users are advised to upgrade to IE11 by then. Support for IE10 and below had already ended back in 2016, except in a few environments like Windows Server 2012 and some embedded versions; now Microsoft is pulling the plug on those few remaining environments.

In the blog post, Microsoft wrote, "We encourage you to use the time available to pilot IE11 in your environments. Upgrading to the latest version of Internet Explorer will ease the migration path to Windows 10, Windows Server 2016 or 2019, or Windows 10 IoT, and unlock the next generation of technology and productivity. It will also allow you to reduce the number of Internet Explorer versions you support in your environment."

Commercial customers of Windows Server 2012 and Windows Embedded 8 Standard can download IE11 via the Microsoft Update Catalog, or via an IE11 upgrade through Windows Update and Windows Server Update Services (WSUS) that Microsoft will publish later this year. IE10 will continue to receive updates throughout 2019; you can find these updates in the Update Catalog and the WSUS channel as "Cumulative Update for Internet Explorer 10". Similarly, updates for IE11 will be labeled "Cumulative Update for Internet Explorer 11" on the Microsoft Update Catalog, Windows Update, and WSUS.

Many Hacker News users are speculating that support for IE11 could also end by 2025. One user said, "If anyone is wondering about IE11, MS says 'Internet Explorer 11 will continue receiving security updates and technical support for the lifecycle of the version of Windows on which it is installed.' Extended support for Windows 10 ends on October 14, 2025. Extended support for Windows Server 2016 ends on January 11, 2027. Presumably one of those two dates could be considered the termination date for IE11." Another Hacker News user believes, "...it is good time to start considering ending IE11 support as well, especially with Chromium-Edge coming out later this year. Edge is getting a Chromium back-end with talk of Windows 7 and 8 support. So, perhaps that's a strategy to kill IE11 too (fingers crossed)."

Read the official announcement by Microsoft for more details.

Also read:
Microsoft Office 365 now available on the Mac App Store
Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’
Microsoft’s Bing ‘back to normal’ in China


Facebook pays users $20/month to install a ‘Facebook Research’ VPN that spies on their phone and web activities, TechCrunch reports

Savia Lobo
30 Jan 2019
4 min read
TechCrunch, in a recent report, revealed that Facebook has been spying on users' data and internet habits by paying users aged between 13 and 35 $20 a month, plus referral fees, to install a "Facebook Research" VPN via beta-testing services such as Applause, BetaBound, and uTest. The VPN gives Facebook a view of users' web as well as phone activity. The practice is similar to Facebook's Onavo Protect app, which was banned by Apple in June 2018 and fully discarded in August. Launched in 2016, the Facebook Research project was renamed Project Atlas in mid-2018 after the backlash against Onavo. One of the companies, uTest, was also running ads for a "paid social media research study" on Instagram and Snapchat, tweeted one of the contributing editors of the TechCrunch report.

https://twitter.com/JoshConstine/status/1090395755880173569

TechCrunch has since updated that "Facebook now tells TechCrunch it will shut down the iOS version of its Research app in the wake of our report."

According to the TechCrunch report, "Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity."

Will Strafach, a security expert at Guardian Mobile Firewall, told TechCrunch, "If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats in instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location-tracking apps you may have installed." As part of the study, users were even asked to provide screenshots of their Amazon purchases.

For underage users, Applause requires parental permission, and Facebook is mentioned in the consent agreement. The agreement also includes this line about the company tracking your children: "There are no known risks associated with this project, however, you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of Apps. You will be compensated by Applause for your child’s participation."

As highlighted by TechCrunch, the Facebook Research app sent data to an address affiliated with Onavo.

https://twitter.com/chronic/status/1090397698803621889

A Facebook spokesperson wrote that the program is being misrepresented by TechCrunch and that there was never a lack of transparency surrounding it. In response, Josh Constine, an editor at TechCrunch, tweeted, "Here is my rebuttal to Facebook's statement regarding the characterization of our story. We stand by our report, and have a fully updated version here." He also provided a link to the updated report, followed by a snippet from it.

https://twitter.com/JoshConstine/status/1090519765452353536
https://twitter.com/matthewstoller/status/1090605150673215494

According to Will Strafach, who did the actual app research for TechCrunch, "they didn't even bother to change the function names, the selector names, or even the 'ONV' class prefix. it's literally all just Onavo code with a different UI. Also, the Root Certificate they have users install so that they can access any TLS-encrypted traffic they'd like."

One Hacker News user explained, "By using a VPN they forced all traffic to go through their servers, and with the root certificate, they are able to monitor and gather data from every single app and website users visit/use. Which would include medical apps, chat apps, Maps/gps apps and even core operating system apps. So for users using Facebook's VPN they are effectively able to mine data which actually belongs to other apps/websites."

Another user writes, "How is this not in violation of most wiretapping laws? Facebook is not the common carrier in these cases. Both parties of conversations with teens are not consenting to the wiretapping, which is not allowed in many US states. I’m not sure teenage consent is considered 'consent' and the parents aren’t a party to the conversations Facebook is wiretapping. Facebook is both paying people and recording the electronic communications."

To know more about this news, head over to TechCrunch's complete report.

Also read:
Facebook hires top EEF lawyer and Facebook critic as Whatsapp privacy policy manager
Facebook has blocked 3rd party ad monitoring plugin tools from the likes of ProPublica and Mozilla that let users see how they’re being targeted by advertisers
Facebook releases a draft charter introducing a new content review board that would filter what goes online


Outage in the Microsoft 365 and Gmail made users unable to log into their accounts

Savia Lobo
30 Jan 2019
2 min read
Yesterday, the Microsoft suffered a worldwide issue affecting its various cloud services including Office 365, Azure Portal, Dynamics 365 and LinkedIn, where many users were unable to log into its cloud-based services. This outage issue was reported around 7.15 in the morning--Sydney time--and is believed to have affected multiple enterprise customers. Microsoft confirmed these problems and said that the network issues in the Azure Active Directory were to be blamed. In an update to customers, the Microsoft team said, “that have their authorization cached are unaffected by this issue, and new authentications are succeeded approximately 50 percent of the time.” They also tweeted that, “Engineers are investigating a Microsoft networking issue impacting customers' ability to log in to the Azure Portal.” https://twitter.com/AzureSupport/status/1090360382466605056 Downdetector, an outage tracking service, showed a heatmap of the problems, which show that the east coast of Australia, as well as New Zealand, felt the impact. Microsoft 365, tweeted today, “We've identified a third-party network provider issue that is affecting authentication to multiple Microsoft 365 services. We're moving services to an alternate network provider to resolve the issue.” https://twitter.com/GossiTheDog/status/1090579331502485505 To know more about this in detail, visit Microsoft’s official website. Gmail also suffers from a global outage Yesterday, Gmail users all around the globe reported a "404" error when they tried accessing the service around 11 am. Google's service status page listed no issues with Gmail at the time, but users clearly disagree. According to Outage Report and Down Detector, Gmail was down in nearly all of Europe, parts of North America, South America and Asia. https://twitter.com/zeefu/status/1090204478308012033 However, the issue with Gmail should now be resolved, as per a Google spokesperson. Internet Outage or Internet Manipulation? 
New America lists government interference, DDoS attacks as top reasons for Internet Outages across the world
How Dropbox uses automated data center operations to reduce server outage and downtime
Philips Hue’s second ongoing remote connectivity outage infuriates users

Natasha Mathur
30 Jan 2019
5 min read

AlgorithmWatch report: 7 key recommendations for improving Automated Decision-Making in the EU

AlgorithmWatch, a non-profit research and advocacy organization, released its report titled ‘Automating Society: Taking Stock of Automated Decision-Making in the EU’ yesterday, in cooperation with Bertelsmann Stiftung (a private operating foundation) and supported by the Open Society Foundations. The report compiles findings from 12 EU member states, as well as from the EU level, on the development and application of automated decision-making (ADM) systems. Based on these findings, it makes recommendations for policymakers in the EU and Member State parliaments, the EU Commission, national governments, researchers, civil society organizations, and the private sector (companies and business associations). Let’s have a look at some of the key recommendations mentioned in the report.

Focus on the application of ADMs that impact society

The report states that, given the popularity of ‘Artificial Intelligence’ right now, it is important to understand the real current challenges and impact of this technology on our societies. It gives an example: predictive analytics used to determine maintenance issues on yogurt production lines should not be the real concern; predictive analytics used to track human behavior is where the focus should be. The report states that these systems need to be democratically controlled in our society, using a combination of regulatory tools, oversight mechanisms, and technology.

Automated decision-making systems aren’t just a technology

The report cautions that treating automated decision-making systems as just a technology, rather than considering them as a whole, narrows the debate to questions of accuracy and data quality. All parts of the framework should be considered while discussing the pros and cons of using a specific ADM application. In practice, this means asking more questions such as:

What data does the system use?
Is the use of that data legal?
What decision-making model is applied to the data?
Is there an issue of bias?
Why do governments even use specific ADM systems?
Is automation being used as an option to save money?

Empower citizens to adapt to new challenges

As per the report, more focus should be put on enhancing citizens’ expertise to help them better determine the consequences of automated decision-making. An example presented in the report is an English-language online course called “Elements of Artificial Intelligence”, created to help Finnish people understand the challenges in ADM. The course was developed as a private-public partnership but has since become an integral part of the Finnish AI programme. This free course teaches citizens the basic concepts and applications of AI and machine learning, with about 100,000 Finns enrolled.

Empower public administration to adapt to new challenges

Just as empowering citizens is important, there is also a need to empower the public administration to ensure a high level of expertise inside its own institutions, whether to develop new systems or to oversee outsourced development. The report recommends creating public research institutions, in cooperation with universities or public research centers, to teach, train, and advise civil servants. Moreover, such institutions should also be created at the EU level to help the Member States.

Strengthen civil society’s involvement in ADM

The report states that there is a lack of civil society engagement and expertise in the field of automated decision-making, even in some large Member States. As per the report, civil society organizations should assess the consequences of ADM as a specific and relevant policy field in their countries and develop strategies to address these challenges. Also, grant-making organizations should develop funding calls and facilitate networking opportunities, and governments should make public funds available for civil society interventions.

Don’t look only at data protection for regulatory ideas

The report mentions how Article 22 of the General Data Protection Regulation (GDPR) has been the subject of much controversy. According to Article 22, on automated individual decision-making, including profiling, “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. A consensus has developed that this article is limited, and that the use cases of ADM systems cannot be regulated by data protection alone. The report stresses the importance of developing governance tools and states that stakeholders should look at creative applications of existing regulatory frameworks, such as equal-pay regulation. This would further help them address new challenges, such as algorithmically controlled platform work (the gig economy), and explore new avenues for regulating the effects of ADM.

Need for a wide range of stakeholders (including civil liberty firms) to develop criteria for good design processes and audits

The report mentions that, on surveying some of the countries, the authors found that governments claim their strategies involve civil society stakeholders merely to bring “diverse voices” to the table. But the term civil society is not well defined: it includes academia, groups of computer scientists or lawyers, think tanks, and so on. This leads to important viewpoints getting missed, since governments use this broad definition to show that ‘civil society’ is included even when it is not part of the conversation. This is why it is critical that organizations focused on rights be included in the debate.
For more coverage, check out the official AlgorithmWatch report.

2019 Deloitte tech trends predictions: AI-fueled firms, NoOps, DevSecOps, intelligent interfaces, and more
IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
Unity introduces guiding Principles for ethical AI to promote responsible use of AI
Amrata Joshi
30 Jan 2019
3 min read

Conda 4.6.0 released with support for more shells, better interoperability among others

On Monday, the team at Anaconda released a new version of Conda, an open source package management and environment management system that runs on Windows, macOS, and Linux. Conda 4.6.0 comes with support for more shells, better interoperability, improved conda search, and more.

https://twitter.com/anacondainc/status/1089970114143965185

What’s new in Conda 4.6.0

Support for more shells
Conda 4.6.0 comes with extensive initialization support so that more shells can use the conda activate command, and adds support for PowerShell. The new “conda init” functionality gets Conda working quickly and less disruptively on a wide variety of shells, such as zsh, bash, csh, fish, xonsh, and more.

Improved interoperability with pip
Conda 4.6.0 comes with preview support for better interoperability with pip. This release uses pip-installed packages to satisfy dependencies, and can remove pip-installed software and replace it with Conda packages when appropriate. Note: this feature is disabled by default right now because it can significantly impact Conda’s performance. If you’d like to try it, you can set this condarc setting: conda config --set pip_interop_enabled True

Activation of a single environment
This release works towards the ideal situation where a single environment is active at any given time.

Conda search gets better
license and license_family have been added to MatchSpec for conda search.

Enhanced fish shell
This release adds autocompletion for conda env to the fish shell.

Major improvements
Clean output for conda <COMMAND> --help and conda config --describe.
https://repo.anaconda.com/pkgs/pro has been removed from the default value for defaults.
References to the 'system' channel have been removed.
The HTTP error body is now included in debug information.
Creating an environment whose name contains spaces is no longer supported.
MatchSpec syntax is now supported in environment.yml files.
The 'condacmd' directory has been renamed to 'condabin'.
Timestamp prioritization can now be disabled when not needed.
repodata is now cached as UTF-8 to handle Unicode characters.
Performance has been improved by caching the hash value on PackageRecord.

Major changes
'conda env attach' and 'conda env upload' have been removed.
Deprecation warnings have been added for 'conda.cli.activate', 'conda.compat', and 'conda.install'.
Environment names containing a colon are now supported.
The default value of max_shlvl has been changed to 0.

Non-user-facing changes
activate.py has been cleaned up with OO inheritance.
The pep8 project has been renamed to pycodestyle.
This release comes with copyright headers.

Bug fixes
In previous releases, Conda’s verify step could hang for a long time while installing a corrupted package. This has been fixed.
The progress bar now uses stderr instead of stdout.
It is now possible to pin a list of packages by adding a file named ‘pinned’ to the conda-meta directory, listing the packages the user doesn’t want updated.

To know more about Conda 4.6.0, check out the official announcement.

Anaconda 5.3.0 released, takes advantage of Python’s Speed and feature improvements
Share projects and environment on Anaconda cloud [Tutorial]
Visualizing data in R and Python using Anaconda [Tutorial]
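The shell setup, pip-interop toggle, and package pinning can be sketched as follows. This is a hedged illustration: the package pins and local directory are hypothetical, and the conda commands are shown as comments since they require a Conda 4.6+ install; only the file-format demonstration actually runs.

```shell
# One-time shell setup and the pip-interop preview (requires conda >= 4.6):
#   conda init bash        # or: zsh, csh, fish, xonsh, powershell, ...
#   conda config --set pip_interop_enabled True

# Pinning: a file named 'pinned' inside an environment's conda-meta
# directory lists packages conda should not update. In a real environment
# it lives at $CONDA_PREFIX/conda-meta/pinned; we write to a local
# directory here purely to illustrate the format.
mkdir -p conda-meta
cat > conda-meta/pinned <<'EOF'
numpy 1.15.*
python 3.6.*
EOF
cat conda-meta/pinned
```

With this file in place, `conda update --all` in that environment would leave the pinned packages at the listed versions.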

Amrata Joshi
30 Jan 2019
3 min read

Rails 6 will be shipping source maps by default in production

The developer community surely owes respect to the innovation of ‘View Source’, as it has made things much easier for coders. Now David Heinemeier Hansson, the creator of Ruby on Rails, has made a move to make programmers’ lives easier by announcing that Rails 6 will be shipping source maps by default in production.

Source maps let developers view code as it was written by its author, with comments, understandable variable names, and all the other help that makes it possible to understand the code. They are sent to users over the wire only when the dev tools are open in the browser. So far, source maps have been seen merely as a local development tool, not something to ship to production; live debugging would make things easier for developers.

According to Hansson’s post, all the JavaScript that runs Basecamp 3 under Webpack now has source maps. He said, “We’re still looking into what it’ll take to get source maps for the parts that were written for the asset pipeline using Sprockets, but all our Stimulus controllers are compiled and bundled using Webpack, and now they’re easy to read and learn from.”

Hansson is also a partner at the web-based software development firm Basecamp. He said that 90% of all the code that runs Basecamp is open source, in the form of Ruby on Rails, Turbolinks, and Stimulus. He further added, “I like to think of Basecamp as a teaching hospital. The care of our users is our first priority, but it’s not the only one. We also take care of the staff running the place, and we try to teach and spread everything we learn. Pledging to protect View Source fits right in with that.”

Sam Saffron, the co-founder of Discourse, said, “I just wanted to voice my support for bringing this back by @dhh. We have been using source maps at Discourse now for 4 or so years, including maps for both JS and SCSS in production, default on.” According to him, one of the important reasons to enable source maps in production is that JS frameworks often have “production” and “development” modes. He said, “I have seen many cases over the years where a particular issue only happens in production and does not happen in development. Being able to debug properly in production is a huge life saver. Source maps are not the panacea as they still have some limitations around local var unmangling and other edge cases, but they are 100 times better than working through obfuscated minified code with magic formatting enabled.”

According to Saffron, one performance concern is the cost of precompilation: the cost was minimal at Discourse, but the cost for a large number of source maps is unpredictable. Users discussed this issue on a GitHub thread two years ago, where most of them expected precompile build times to be reduced. One user commented on GitHub, “well-generated source maps can actually make it very easy to rip off someone else's source.” Another comment reads, “Source maps are super useful for error reporting, as well as for analyzing bundle size from dependencies. Whether one chooses to deploy them or not is their choice, but producing them is useful.”

Ruby on Rails 6.0 Beta 1 brings new frameworks, multiple DBs, and parallel testing
GitHub addresses technical debt, now runs on Rails 5.2.1
Introducing Web Application Development in Rails
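To make the mechanics concrete, here is a minimal, hypothetical sketch of what a shipped source map looks like on disk. The file names, the function, and the `mappings` string are illustrative inventions, not taken from Basecamp or Rails; real maps are emitted by the bundler.

```shell
# A production bundle ends with a comment pointing at its source map:
cat > application.min.js <<'EOF'
function d(n){return n*2}
//# sourceMappingURL=application.min.js.map
EOF

# The map itself is JSON linking minified positions back to the original
# source files and identifier names (the "mappings" value is a placeholder,
# not a real base64-VLQ encoding):
cat > application.min.js.map <<'EOF'
{
  "version": 3,
  "file": "application.min.js",
  "sources": ["application.js"],
  "names": ["double", "n"],
  "mappings": "AAAA,SAASA,EAAEC"
}
EOF

# With dev tools open, the browser fetches the .map file and shows
# application.js as written, not the minified bundle.
tail -n 1 application.min.js
# → //# sourceMappingURL=application.min.js.map
```

Shipping both files is all "source maps in production" means; browsers only request the `.map` file when dev tools are open, so regular visitors pay no cost.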

Natasha Mathur
30 Jan 2019
5 min read

Uber releases AresDB, a new GPU-powered real-time Analytics Engine

Uber announced the details of its new open source real-time analytics engine, AresDB, yesterday. AresDB, released in November 2018, is Uber’s new solution for real-time analytics that helps unify, simplify, and improve Uber’s real-time analytics database solutions. It makes use of graphics processing units (GPUs), an unconventional power source for analytics, to help analytics grow at scale. AresDB’s architecture features column-based storage with compression (for storage efficiency), real-time ingestion with upsert support (for high data accuracy), and GPU-powered query processing (for highly parallelized data processing). Let’s have a look at these key features of AresDB.

Column-based storage

AresDB stores data in a columnar format: the values of each column are stored as a columnar value vector, while the nullness/validity of those values is stored in a separate null vector, with the validity of each value represented by one bit. Column-based data lives in two types of stores, a live store and an archive store.

Live store
The live store holds all the uncompressed and unsorted columnar data (live vectors). Data records in the live store are further partitioned into (live) batches of configured capacity, with each column within a batch stored as a columnar value vector plus a null vector, as described above.

Archive store
AresDB stores all the mature, sorted, and compressed columnar data (archive vectors) in an archive store, using fact tables (which store an infinite stream of time-series events). Records in the archive store are also partitioned into batches; however, unlike live batches, an archive batch contains the records of a particular Universal Time Coordinated (UTC) day, sorted according to the user-configured column sort order.

Real-time ingestion with upsert support

Clients ingest data through the ingestion HTTP API by posting an upsert batch, a custom serialized binary format that minimizes space overhead while keeping the data randomly accessible. When AresDB receives an upsert batch for ingestion, it first writes it to redo logs for recovery. Once the upsert batch has been appended to the end of the redo log, AresDB identifies and skips “late” records (those whose event time is older than the archived cut-off time) for ingestion into the live store; such “late” records are appended to a backfill queue instead. For records that are not “late,” AresDB uses the primary key index to locate the batch within the live store and applies them there.

GPU-powered query processing

Users run queries against AresDB with the Ares Query Language (AQL), created by Uber. AQL is a time-series analytical query language expressed in JSON, YAML, and Go objects, rather than the standard SELECT FROM WHERE GROUP BY syntax of other SQL-like languages. For dashboard and decision-system developers, AQL in JSON format offers a better programmatic query experience than SQL, because the JSON format allows developers to easily manipulate queries in code without worrying about issues such as SQL injection.

AresDB manages multiple GPU devices with the help of a device manager that models GPU device resources in two dimensions, namely GPU threads and device memory, and tracks GPU memory consumption while processing queries. After query compilation, AresDB estimates the resources required for the execution of a query, and the device memory requirement must be satisfied before the query is allowed to start. AresDB is currently capable of running either one or several queries on the same GPU device, as long as the device can satisfy all of their resource requirements.

Future work

AresDB is open sourced under the Apache License and is currently used widely at Uber to power its real-time data analytics dashboards, helping it make data-driven decisions at scale. In the future, the Uber team wants to improve the project by adding new features. These include a distributed design for AresDB, to improve its scalability and reduce overall operational costs, and developer support and tooling to help developers quickly integrate AresDB into their analytics stacks. Other planned work covers expanding the feature set and query engine optimization.

For more information, check out the official Uber announcement.

Uber AI Labs introduce POET (Paired Open-Ended Trailblazer) to generate complex and diverse learning environments and their solutions
Uber to restart its autonomous vehicle testing, nine months after the fatal Arizona accident
Uber manager warned the leadership team of the inadequacy of safety procedures in their prototype robo-taxis early March, reports The Information
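As a rough illustration of the JSON form of an AQL query described above, the sketch below writes a hypothetical query body (count trips per city over the last day) and checks that it is well-formed. The table and column names, the exact field layout, and the endpoint/port in the comment are assumptions for illustration, not taken from AresDB’s documentation.

```shell
# Hypothetical AQL query body; all names here are illustrative.
cat > trips_query.json <<'EOF'
{
  "table": "trips",
  "dimensions": [
    { "sqlExpression": "city_id" }
  ],
  "measures": [
    { "sqlExpression": "count(*)" }
  ],
  "rowFilters": [ "status = 'completed'" ],
  "timeFilter": { "column": "request_at", "from": "-1d" }
}
EOF

# A client would POST this body to a running AresDB instance's query
# endpoint over HTTP (host, port, and path depend on the deployment), e.g.:
#   curl -X POST http://localhost:9374/query/aql -d @trips_query.json

# Sanity-check that the body is well-formed JSON:
python3 -m json.tool trips_query.json > /dev/null && echo "valid AQL JSON"
```

Because the query is plain JSON, a dashboard can build or modify it programmatically (adding filters, swapping dimensions) without string concatenation, which is the SQL-injection-avoidance point the article makes.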
Melisha Dsouza
30 Jan 2019
3 min read

San Francisco legislation proposes a citywide ban on government’s use of facial recognition technology

San Francisco will be the first city in the U.S. to ban the government from using facial recognition technology, if legislation tabled yesterday is approved. The ‘Stop Secret Surveillance Ordinance’ will be proposed by Supervisor Aaron Peskin. The legislation seeks to do the following:

City departments will have to seek approval from the Board of Supervisors before using or buying surveillance technology.
It would implement annual audits of surveillance technology, to ensure that the tools involved are properly used.
A blanket ban would stop departments from purchasing or using facial recognition technology.

The legislation, which would also apply to law enforcement, will be heard in committee next month. It has already obtained support from civil rights groups like the ACLU of Northern California.

https://twitter.com/Matt_Cagle/status/1090421659754991616

The legislation also makes a strong point that ‘surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective. The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring’.

The news comes at a time when facial recognition technology is at the center of debate among privacy and security experts. While the tech has been put to good use by various organizations, we can’t help but notice the negative impacts it can have on citizens. Take, for instance, the WEF 2019 talk that pointed out the role of governments and the military in the use (and potential abuse) of today’s technology.

Users on Twitter displayed reactions along the same lines, expressing their relief that the government will no longer be able to “invade and micromanage the privacy of others.” Some have also called government use of facial recognition ‘dangerous’.

https://twitter.com/jevanhutson/status/1090384142670258176
https://twitter.com/NicoleOzer/status/1090433222272417792

Head over to The Verge for more insights on this news.

Four IBM facial recognition patents in 2018, we found intriguing
Google won’t sell its facial recognition technology until questions around tech and policy are sorted
Australia’s Facial recognition and identity system can have “chilling effect on freedoms of political discussion, the right to protest and the right to dissent”: The Guardian report

Bhagyashree R
30 Jan 2019
2 min read

Mozilla releases Firefox 65 with support for AV1, enhanced tracking protection, and more!

After releasing Firefox 64 last month, Mozilla released Firefox 65 yesterday. This version comes with support for AV1, enhanced tracking protection, an improved browsing experience for multilingual users, and more. Following are the updates Firefox 65 comes with:

Enhanced tracking protection: This version comes with a new set of “redesigned controls” that let users choose their desired level of privacy protection. These controls (Standard, Strict, and Custom) are added under the newly introduced Content Blocking section.

Updated Language section: Users can now directly install multiple language packs and order language preferences for Firefox and websites, without having to download locale-specific versions.

Support for AV1: Firefox 65 for Windows supports AV1, the royalty-free video compression technology developed and standardized by the Alliance for Open Media (AOMedia). AV1 was created to provide users with affordable, high-quality video compression.

Support for the WebP image format: This version comes with support for the WebP image format, which provides the same image quality as existing formats at smaller sizes. This saves bandwidth and speeds up page loads, improving performance and web compatibility.

Updates for developers: A new Flexbox inspector tool is introduced, which shows details of Flexbox containers and helps debug Flex item sizes. CSS changes made in the Rules panel are now tracked in the new Changes tab. Support has been added for the Storage Access API on desktop platforms.

Along with these changes, Mozilla has also fixed several security flaws in this version. To read the full list of updates, check out the release notes of Firefox 65.

Mozilla disables the by default Adobe Flash plugin support in Firefox Nightly 69
Mozilla releases Firefox 64 and Firefox 65 beta
Mozilla shares why Firefox 63 supports Web Components