Tech News

User discovers bug in Debian stable kernel upgrade; armmp package affected

Melisha Dsouza
18 Feb 2019
3 min read
Last week, Jürgen Löb, a Debian user, discovered a bug in the linux-image-4.9.0-8-armmp-lpae package of the Debian system. The affected version is 4.9.144-3. Löb states that he updated his Lamobo R1 board with apt update; apt upgrade. After the update, however, u-boot was stuck at "Starting kernel" with no further output. He faced the same issue on a Bananapi 1 board.

He performed the following steps to recover his system, downgrading to a backup kernel by mounting the boot partition on the SD card:

- Extract the boot script: dd if=boot.scr of=boot.script bs=72 skip=1
- In boot.script, replace the command setenv fk_kvers '4.9.0-8-armmp-lpae' with setenv fk_kvers '4.9.0-7-armmp-lpae' (a backup kernel was available on his boot partition)
- Rebuild the boot image: mkimage -C none -A arm -T script -d boot.script boot.scr

After performing these steps he was able to boot the system with the old kernel version, and he restored the previous version (4.9.130-2) with the following command: dpkg -i linux-image-4.9.0-8-armmp-lpae_4.9.130-2_armhf.deb

He cross-checked the issue and reported that upgrading to 4.9.144-3 again after these steps brings back the unbootable behavior, concluding that the upgrade to 4.9.144-3 is causing the problem.

Timo Sigurdsson, another Debian user, stated: “I recovered both systems by replacing the contents of the directories /boot/ and /lib/modules/ with those of a recent backup (taken 3 days ago). After logging into the systems again, I downgraded the package linux-image-4.9.0-8-armmp-lpae to 4.9.130-2 and rebooted again in order to make sure no other package upgrade caused the issue. Indeed, with all packages up-to-date except linux-image-4.9.0-8-armmp-lpae, the systems work just fine. So, there must be a serious regression in 4.9.144-3 at least on armmp-lpae”.

In response to this thread, multiple users reported other broken configurations; for instance, plain armmp (non-lpae) is broken for Armada385/Caiman and under QEMU. Vagrant Cascadian, another user, added that all of his armhf boards running this kernel failed to boot, including:

- imx6: Cubox-i4pro, Cubox-i4x4, Wandboard Quad
- exynos5: Odroid-XU4
- exynos4: Odroid-U3
- rk3328: firefly-rk3288
- sunxi A20: Cubietruck

The Debian team has not yet given an official response. You can head over to the Debian bugs page for more information on this news.

Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
Debian 9.7 released with fix for RCE flaw

Cygwin 3.0.0-1 released!

Prasad Ramesh
18 Feb 2019
3 min read
Last Saturday, Cygwin 3.0.0-1 was released. This major release brings support for new file systems, new tools, new APIs, and other changes.

New features in Cygwin 3.0.0-1

- Support has been added for the clocks CLOCK_REALTIME_COARSE, CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_RAW, CLOCK_BOOTTIME, CLOCK_REALTIME_ALARM, and CLOCK_BOOTTIME_ALARM.
- Case-sensitive directories are now supported. From this release, directories created by mkdir(2) within the Cygwin installation are automatically case sensitive. For this feature to work, you need Windows 10 1803 or later with the Windows Subsystem for Linux (WSL) installed.
- There are two new file input/output controls, FS_IOC_GETFLAGS and FS_IOC_SETFLAGS; the actual inode flags are Cygwin-specific. These flags allow setting or resetting the DOS attributes, file sparseness, FS-level encryption, and compression, and can also be used to modify case sensitivity programmatically.
- Two new tools, chattr(1) and lsattr(1), expose the new input/output controls on the command line.
- Support has been added for exFAT, the Linux-specific open(2) flag O_PATH, and the Linux-specific linkat(2) flag AT_EMPTY_PATH.
- Overrun counters for POSIX timers (via timer_getoverrun() or siginfo_t::si_overrun) are now supported.
- The following new APIs have been added: signalfd, timerfd_create, timerfd_gettime, timerfd_settime, and timer_getoverrun.
- fork(2) can now recover from a situation where an in-use executable/DLL is removed or replaced during process runtime. This behavior is disabled by default and limited to EXE and DLL files on the same NTFS partition as Cygwin.

Changes in Cygwin 3.0.0-1

- clock_nanosleep, pthread_condattr_setclock, and timer_create now support all clocks, with the exception of CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID.
- clock_setres is a no-op in Cygwin 3.0.0-1.
- Renaming a file to the name of an in-use file now deletes the other file; previously, it was moved to the recycle bin. The new POSIX rename semantics are available on NTFS starting with Windows 10 1809.
- open(..., O_TMPFILE) now moves the file to the trash immediately in order to free the parent directory.
- The wctype functions have been updated to Unicode 11.0.
- The matherr, SVID, and X/Open math library configurations have been removed; IEEE is now the default math library configuration.
- uname(2) is improved for newly built applications.
- Kerberos/MSV1_0 S4U authentication replaces creating a token from scratch and the Cygwin LSA authentication package.

For bug fixes and more, you can keep up with the Cygwin mailing list.

GitHub launches draft pull requests
Introducing RustPython, a Python 3 interpreter written in Rust
.NET Core 3 Preview 2 is here!

Google Chrome developers “clarify” the speculations around Manifest V3 after a study nullifies their performance hit argument

Bhagyashree R
18 Feb 2019
4 min read
On Friday, a study was published on WhoTracks.me analyzing the performance of the most commonly used ad blockers. The study was motivated by the recent Manifest V3 controversy: Google developers are planning an update that, as proposed, could cripple all ad blockers.

What update are the Chrome developers introducing?

The developers are planning to introduce the declarativeNetRequest API as an alternative to the webRequest API, limiting the blocking version of webRequest. According to Manifest V3, the declarativeNetRequest API will be treated as the primary content-blocking API in extensions. The Chrome developers listed two reasons behind this update: performance, and a better privacy guarantee to users. The API allows extensions to tell Chrome what to do with a given request, rather than have Chrome forward the request to the extension; this allows Chrome to handle a request synchronously. One ad blocker maintainer has reported an issue on the Chromium bug tracker for this feature: “If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin (“uBO”) and uMatrix, can no longer exist.”

What did the study by Ghostery reveal?

The study addresses the performance argument made by the developers. For it, the Ghostery team analyzed the network performance of the most commonly used ad blockers: uBlock Origin, Adblock Plus, Brave, DuckDuckGo, and Cliqz's Ghostery. The study revealed that these content blockers, except DuckDuckGo's, have only sub-millisecond median decision time per request, an overhead too small to be noticeable by users. Additionally, the efficiency of content blockers is continuously being improved with innovative approaches and with the help of technologies like WebAssembly.

How did the Google developers react to this study and the feedback surrounding Manifest V3?

Following the publication of the study, and after looking at the feedback, Devlin Cronin, a Software Engineer at Google, clarified that these changes are not meant to prevent content blocking, and added that the changes listed in Manifest V3 are still in the draft and design stage. In the Google group Manifest V3: Web Request Changes, Cronin said, “We are committed to preserving that ecosystem and ensuring that users can continue to customize the Chrome browser to meet their needs. This includes continuing to support extensions, including content blockers, developer tools, accessibility features, and many others. It is not, nor has it ever been, our goal to prevent or break content blocking.” The team is not planning to remove the webRequest API. Cronin added, “In particular, there are currently no planned changes to the observational capabilities of webRequest (i.e., anything that does not modify the request).” Based on the feedback and concerns shared, the Chrome team did make some revisions, including adding support for dynamic rules to the declarativeNetRequest API. They are also planning to increase the ruleset size, previously capped at 30K rules.

Users are, however, not convinced by this clarification. One user commented on Hacker News, “Keep in mind that their story about performance has been shown to be a complete lie. There is no performance hit from using webRequest like this. This is about removing sophisticated ad blockers in order to defend Google's revenue stream, plain and simple.” Coincidentally, a Chrome 72 upgrade seems to break ad blockers in a way that they can’t see or block analytics anymore if the web page uses a service worker.

https://twitter.com/jviide/status/1096947294920949760

Chromium developers propose an alternative to webRequest API that could result in existing ad blockers’ end
Regulate Google, Facebook, and other online platforms to protect journalism, says a UK report
Google announces the general availability of a new API for Google Docs

UK lawmakers publish a report after 18-month-long investigation condemning Facebook’s disinformation and fake news practices

Sugandha Lahoti
18 Feb 2019
4 min read
It seems the bad days for Facebook are never-ending. Today, the Digital, Culture, Media and Sport Committee published its final report on disinformation and ‘fake news’, singling out Facebook’s handling of personal data, and its use for political campaigns, as prime areas for inspection by regulators. The report comes after the UK Parliament committee spent more than 18 months investigating Facebook and its privacy practices. The interim report, published in July 2018, offered the UK government a number of recommendations; the final report adds new recommendations and repeats earlier ones.

The interim report proposed a code of ethics that all tech companies should agree to uphold. For the final report, the MPs have recommended that platforms be subject to a Compulsory Code of Ethics overseen by an independent regulator, with companies that fail to meet rules on harmful or illegal content facing hefty fines.

The committee was severely critical of Facebook, condemning Mark Zuckerberg for failing to answer the members’ questions: “By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” writes the report.

Damian Collins MP, Chair of the DCMS Committee, said, “Even if Mark Zuckerberg doesn’t believe he is accountable to the UK Parliament, he is to the billions of Facebook users across the world. Evidence uncovered by my Committee shows he still has questions to answer yet he’s continued to duck them, refusing to respond to our invitations directly or sending representatives who don’t have the right information.”

In December 2018, the committee published a report of Facebook internal documents, including emails sent between CEO Mark Zuckerberg and other senior executives regarding a company called Six4Three. The documents revealed that Facebook monetized its valuable user data, allowing apps to use Facebook to grow their network as long as it increased usage of Facebook, placing strict limits on possible competitor access, and much more. For the final report, the committee has published more evidence from the Six4Three documents. Per the report, this demonstrates “Facebook's aggressive action against certain apps and highlights the link between Friends' data and the financial value of the developers' relationship with Facebook.”

Facebook was also taken to task over Russian meddling in elections. The committee has urged the Government to make a statement about the number of investigations being carried out into Russian interference in UK politics. Facebook and other social media platforms, it says, should be clear that they have a responsibility to comply with the law and not facilitate illegal activity such as foreign influence, disinformation, funding, voter manipulation, and the sharing of data.

To summarize, the DCMS committee calls for:

- A Compulsory Code of Ethics for tech companies, overseen by an independent regulator
- The regulator to be given powers to launch legal action against companies breaching the code
- The Government to reform current electoral communications laws and rules on overseas involvement in UK elections
- Social media companies to be obliged to take down known sources of harmful content, including proven sources of disinformation

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee concludes.

You can go through the full report here.

Facebook and the U.S. government are negotiating over Facebook’s privacy issues
Regulate Google, Facebook, and other online platforms to protect journalism, says a UK report
German regulators put a halt to Facebook’s data gathering activities and ad business

How Deliveroo migrated from Ruby to Rust without breaking production

Bhagyashree R
15 Feb 2019
3 min read
Yesterday, the Deliveroo engineering team shared their experience of migrating a Tier 1 service from Ruby to Rust without breaking production. Deliveroo is an online food delivery company based in the United Kingdom.

Why did Deliveroo part ways with Ruby for the Dispatcher service?

The Logistics team at Deliveroo runs a service called Dispatcher. This service offers each order to the optimal rider, and it does so with the help of a timeline for each rider. The timeline predicts where riders will be at a given point in time, which allows the service to efficiently suggest a rider for an order. Building these timelines requires a lot of computation; each computation is quick, but there are a great many of them. The Dispatcher service was first written in Ruby, the company’s preferred language in the early days. That was fine while the business was smaller, but as Deliveroo grew and the number of orders increased, the Dispatcher service started taking much longer than before.

Why did they choose Rust as the replacement for Ruby?

Instead of rewriting the whole thing in Rust, the team decided to identify the bottlenecks slowing down the Dispatcher service and rewrite only those in a different programming language. They concluded that the easiest route was to build a native extension written in Rust that works with the current code written in Ruby. The team chose Rust because it provides performance comparable to C while being memory safe. Rust also allowed them to build dynamic libraries, which can later be loaded into Ruby. Additionally, some team members already had experience with Rust, and one part of the Dispatcher was already written in it.

How did they migrate from Ruby to Rust?

There are two options for calling Rust from Ruby. The first is writing a dynamic library in Rust with an extern "C" interface and calling it using FFI. The second is writing a dynamic library, but using the Ruby API to register methods, so that they can be called from Ruby directly, just like any other Ruby code. The Deliveroo team chose the second approach, as there are many libraries available to make it easier, for instance ruru, rutie, and Helix. The team settled on Rutie, a recent fork of Ruru that is under active development.

The team planned to gradually replace all parts of the Ruby Dispatcher with Rust. They began the migration by replacing the classes which did not have any dependencies on other parts of the Dispatcher with Rust implementations, behind feature flags. As the APIs of the Ruby and Rust implementations were quite similar, they were able to reuse the same tests. With the help of Rust, the overall dispatch time was reduced significantly. For instance, in one of their larger zones, it dropped from ~4 sec to 0.8 sec, and of those 0.8 seconds, the Rust part consumed only 0.2 seconds.

Read the post shared by Andrii Dmytrenko, a Software Engineer at Deliveroo, for more details.
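To make the first option concrete, here is a minimal sketch, assuming a made-up function (travel_time_secs), of a Rust dynamic library exposing a C-compatible function that Ruby could call over FFI. It illustrates the mechanism only and is not Deliveroo's actual code.

```rust
// lib.rs — minimal sketch of the extern "C" route (illustrative only).
// Build as a dynamic library by setting crate-type = ["cdylib"] in
// Cargo.toml, then load the resulting .so/.dylib from Ruby's `ffi` gem.

/// Estimated travel time in seconds for a distance at a given speed.
/// Only plain C types cross the boundary, so any FFI host can call it.
#[no_mangle]
pub extern "C" fn travel_time_secs(distance_m: f64, speed_mps: f64) -> f64 {
    if speed_mps <= 0.0 {
        return f64::INFINITY; // guard against a zero or negative speed
    }
    distance_m / speed_mps
}
```

The Rutie route Deliveroo actually chose goes a step further: rather than exposing raw C functions, the library uses Ruby's own C API to register real Ruby classes and methods, so the Ruby side needs no FFI declarations at all.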
Introducing RustPython, a Python 3 interpreter written in Rust
Rust 1.32 released with a print debugger and other changes
How has Rust and WebAssembly evolved in 2018

Amazon won’t be opening its HQ2 in New York due to public protests

Prasad Ramesh
15 Feb 2019
3 min read
Yesterday, Amazon released an official statement saying that it will not be opening the second headquarters planned for Long Island City, Queens, New York. The HQ2 deal was finalized in November last year with a promise that the New York headquarters would generate 25,000 jobs. The deal was negotiated by Gov. Andrew Cuomo, but it stirred public protest against handing Amazon $3 billion in corporate welfare funds, with Senator Mike Gianaris among those opposed. After three months of prolonged protests since the announcement, Amazon decided to withdraw its plans for HQ2 in Long Island City: “A number of state and local politicians have made it clear that they oppose our presence and will not work with us to build the type of relationships that are required to go forward with the project we and many others envisioned in Long Island City”.

Though many news outlets quoted a Siena College poll showing that a majority of New Yorkers supported the idea, only 778 people participated throughout the state, with far fewer from Queens itself. The poll was not a good representation of the general sentiment of Queens residents, who took to the streets in protests against HQ2. The wide negative backlash from locals made Amazon change its plans. The decision was upsetting for both Gov. Andrew M. Cuomo and Mayor Bill de Blasio, as it undoes their efforts to bring HQ2 to Long Island City.

While some rejoice in Amazon’s decision, others are sad about it.

https://twitter.com/Neil_Irwin/status/1096094205477380097

Congresswoman Alexandria Ocasio-Cortez seemed very happy about this victory:

https://twitter.com/AOC/status/1096117499492478977

Some are happy that this ‘national embarrassment’ did not happen:

https://twitter.com/wesmckinn/status/1096127938297253888

The protests went beyond the HQ2 deal itself, targeting Amazon’s other business policies such as its anti-union stance and its tech support for ICE.

https://twitter.com/jdavidgoodman/status/1096144088234119169

Amazon says it does not intend to search for another HQ2 site at this time. It will proceed with its Northern Virginia and Nashville plans, and it will keep hiring across its 17 corporate offices in the US and Canada. You can also check out this article by Business Insider that details what happened to Seattle after Amazon HQ1. You can read Amazon’s statement on the Amazon blog.

Amazon faces increasing public pressure as HQ2 plans go under the scanner in New York
Amazon splits HQ2 between New York and Washington, D.C. after making 200+ cities compete over a year; public sentiments largely negative
Amazon increases the minimum wage of all employees in the US and UK

How you can replace a hot path in JavaScript with WebAssembly

Bhagyashree R
15 Feb 2019
5 min read
Yesterday, Das Surma, a Web Advocate at Google, shared how he and his team replaced a JavaScript hot path in the Squoosh app with WebAssembly. Squoosh is an image compression web app that lets you compress images with a variety of codecs compiled from C++ to WebAssembly. Hot paths are code execution paths where most of the execution time is spent. With this update, the team aimed to achieve predictable performance across all browsers: WebAssembly's strict typing and low-level architecture enable more optimizations during compilation, and though JavaScript can achieve similar performance, it is often difficult to stay on the fast path.

What is WebAssembly?

WebAssembly, also known as Wasm, provides a way to execute code written in different languages at near-native speed on the web. It is a low-level language with a compact binary format that serves as a compilation target for C/C++/Rust so they can run on the web. When you compile C or Rust code to WebAssembly, you get a .wasm file containing a "module declaration": in addition to the binary instructions for the functions contained within, it lists all the imports the module needs from its environment and the exports the module provides to the host.

Comparing the file sizes generated

To compare languages, Surma gave the example of a JavaScript function that rotates an image by multiples of 90 degrees, iterating over every pixel of the image and copying it to a different location. The function was written in three languages, C, Rust, and AssemblyScript, and compiled to WebAssembly.

C and Emscripten: Emscripten is a C compiler that lets you easily compile C code to WebAssembly. After porting the entire JavaScript code to C and compiling it with emcc, Emscripten creates a glue code file called c.js and a wasm module called c.wasm. The wasm module gzipped to almost 260 bytes and the c.js file was 3.5 KB.

Rust: Rust is a programming language syntactically similar to C++, designed for better memory and thread safety. The Rust team has introduced various tooling to the WebAssembly ecosystem, among them wasm-pack, which turns Rust code into modules that work out of the box with bundlers like Webpack. Compiling the Rust code with wasm-pack produced a 7.6 KB wasm module with about 100 bytes of glue code.

AssemblyScript: AssemblyScript compiles a strictly-typed subset of TypeScript to WebAssembly ahead of time. It uses the same syntax as TypeScript but swaps the standard library for its own. This means you can't compile just any TypeScript to WebAssembly, but you don't have to learn a new programming language to write it. Compiled with the AssemblyScript/assemblyscript npm package, AssemblyScript produced a wasm module of around 300 bytes and no glue code; the module can work directly with vanilla WebAssembly APIs.

Comparing the sizes of the files generated from the three languages, Rust gave the biggest file.

Comparing the performance

To analyze performance, the team compared speed per language and speed per browser, sharing the results in two graphs (source: Google Developers). The graphs show that all the WebAssembly modules were executed in ~500ms or less, demonstrating that WebAssembly gives predictable performance. Regardless of which language you choose, the variance between browsers and languages is minimal: the standard deviation of JavaScript across all browsers is ~400ms, while the standard deviation of all the WebAssembly modules across all browsers is ~80ms.

Which language should you choose if you have a JS hot path and want to make it faster with WebAssembly?

Looking at the above results, the best choice seems to be C or AssemblyScript, but the team decided to go with Rust. They narrowed it down to Rust because all the codecs shipped in Squoosh so far are compiled using Emscripten, and the team wanted to broaden their knowledge of the WebAssembly ecosystem by using a different language. They did not choose AssemblyScript because it is relatively new and its compiler is not as mature as Rust's. The file size difference between Rust and the other languages was quite large, but in practice this is not a big deal. Going by runtime performance, Rust showed a faster average across browsers than AssemblyScript, and Rust is more likely to produce fast code without requiring manual code optimizations.

To read more in detail, check out Surma's post on Google Developers.
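As a rough illustration of the Rust route, below is a minimal sketch of a wasm-bindgen export; the function name and signature are invented for illustration and are not Squoosh's actual rotation code. Building it with wasm-pack produces the wasm module plus the small JavaScript glue discussed above.

```rust
// lib.rs — illustrative sketch of a Rust function exported to WebAssembly.
// Compile with `wasm-pack build`; wasm-bindgen generates the JS glue.
use wasm_bindgen::prelude::*;

/// Rotate a square RGBA image (width == height) 90 degrees clockwise.
/// `pixels` holds 4 bytes per pixel; a new rotated buffer is returned.
#[wasm_bindgen]
pub fn rotate90(pixels: &[u8], width: usize) -> Vec<u8> {
    let mut out = vec![0u8; pixels.len()];
    for y in 0..width {
        for x in 0..width {
            let src = (y * width + x) * 4;               // source pixel
            let dst = (x * width + (width - 1 - y)) * 4; // rotated position
            out[dst..dst + 4].copy_from_slice(&pixels[src..src + 4]);
        }
    }
    out
}
```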
Introducing CT-Wasm, a type-driven extension to WebAssembly for secure, in-browser cryptography
Creating and loading a WebAssembly module with Emscripten’s glue code [Tutorial]
The elements of WebAssembly – Wat and Wasm, explained [Tutorial]


GNU Health Federation message and authentication server drops MongoDB and adopts PostgreSQL

Melisha Dsouza
15 Feb 2019
2 min read
Just after Red Hat announced its plans to drop MongoDB from its Satellite system management solution because of its SSPL license, GNU has followed suit. Earlier this week, GNU announced plans to move Thalamus, the GNU Health Federation message and authentication server, from MongoDB to PostgreSQL.

As listed in the post, the main reason for the switch is MongoDB's decision to change the license of its server to the Server Side Public License (SSPL). Because of that decision, many GNU/Linux distributions no longer include the MongoDB server. GNU also points to the reluctance of organizations like the OSI and the Free Software Foundation to accept the license. That hesitation, the rejection from a large part of the libre software community, and the immediate end of support for GPL versions of MongoDB have led to the adoption of PostgreSQL for Thalamus.

Dr. Luis Falcon, President of GNU Solidario, says that one of the many reasons for choosing PostgreSQL was its JSON(B) support, which provides the flexibility and scalability found in document-oriented engines. The upcoming Thalamus server will be designed to support PostgreSQL.

To stay updated with further progress on this announcement, head over to the GNU blog.
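To illustrate the JSON(B) support Dr. Falcon points to, here is a hedged sketch of a document-style query against a PostgreSQL JSONB column, written in Rust with the postgres crate. The table and column names (people, data) and the connection string are hypothetical, not Thalamus's actual schema.

```rust
// Illustrative sketch: filtering and projecting a JSONB column much like
// documents in MongoDB. Table, column, and DSN below are hypothetical.
use postgres::{Client, Error, NoTls};

fn main() -> Result<(), Error> {
    let mut client = Client::connect(
        "host=localhost user=health dbname=thalamus", // placeholder DSN
        NoTls,
    )?;

    // `->>` extracts a JSON field as text, so JSONB rows can be queried
    // with ordinary SQL predicates and indexes.
    for row in client.query(
        "SELECT data->>'name' FROM people WHERE data->>'federation_id' = $1",
        &[&"ESP-001"],
    )? {
        let name: String = row.get(0);
        println!("{}", name);
    }
    Ok(())
}
```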
GNU Bison 3.3 released with major bug fixes, yyrhs and yyphrs tables, token constructors and more
GNU ed 1.15 released!
GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation


The Indian government proposes to censor social media content and monitor WhatsApp messages

Prasad Ramesh
15 Feb 2019
3 min read
The Indian government seeks to censor big internet services in the country, according to a New York Times report, and also wants to monitor WhatsApp messages so they can be traced back to the original sender. Earlier this month, India passed a rule on foreign direct investment that restricts how large e-commerce companies operate in the country.

Remove content the Indian govt deems inappropriate

If this censorship proposal is passed, Indian government authorities could make companies like Facebook, Google, Twitter, and TikTok remove content they deem inappropriate or hateful. It has been argued that such changes violate freedom of speech and would make India autocratic like China. Large tech companies are fighting the proposal. Apar Gupta, executive director of the Internet Freedom Foundation, said to NYT: “The proposed changes have an authoritarian bent. This is very similar to what China does to its citizens, where it polices their every move and tracks their every post on social media.” The Indian parliamentary standing committee has also summoned Twitter CEO Jack Dorsey or another senior member of the global team.

They want WhatsApp to break its encryption

The Indian govt also seeks access to messages on Facebook’s WhatsApp. The messaging platform is widely used in the country, including by people spreading disinformation, pornography, and hateful content. With access to WhatsApp messages, the government could trace the original source of such content. As reported by Bloomberg, Carl Woog, WhatsApp head of communications, said: “What is contemplated by the rules is not possible today given the end-to-end encryption that we provide and it would require us to re-architect WhatsApp, leading us to a different product, one that would not be fundamentally private”. WhatsApp already bans 250,000 accounts every month for sharing inappropriate content involving children. If the rules come into effect, WhatsApp will have to comply with government officials in tracing the source of criminal activity on the platform.

Public reactions

A Hacker News user points out: “Yet another example where "think of the children" is abused to crack down on human rights. India's government should stop going after messaging apps and rather try to find out the root causes of the problems they see: lynchings and brutal sexual violence have nothing to do with Whatsapp, they're indicators of a widespread cultural problem.”

Big tech companies like Facebook and Google have flourished in India, which has a fast-growing digital population. Now India joins other countries in considering rules that would restrict activity on digital platforms.

Google faces pressure from Chinese, Tibetan, and human rights groups to cancel its censored search engine, Project DragonFly
Amnesty International takes on Google over Chinese censored search engine, Project Dragonfly
Ex-googler who quit Google on moral grounds writes to Senate about company’s “Unethical” China censorship plan

GitHub launches draft pull requests

Amrata Joshi
15 Feb 2019
3 min read
Yesterday, GitHub launched a new feature named draft pull requests, which allows users to open a pull request before they are done implementing all the code changes and to start a conversation with collaborators while the code is still in progress. Even if the code isn’t ready, a user can signal that a pull request is the start of a conversation and still let people check it out locally and give feedback; and even if the pull request ends up being closed or the code refactored entirely, the work done in it remains a collaboration. Draft pull requests also help with pull requests that would otherwise be prematurely closed, or with times when users start working on a new feature and forget to send a PR.

When a user opens a pull request, a drop-down arrow appears next to the ‘Create pull request’ button; toggling the drop-down lets the user create a draft instead. A draft pull request is styled differently to indicate its draft state. Changing the status to ‘Ready for review’ near the bottom of the pull request removes the draft state and allows merging according to the project’s settings. If the repository has a CODEOWNERS file, a draft pull request suppresses notifications to those reviewers until it is marked as ready for review.

Users have given mixed reviews to this news. According to a few users, the new feature will save a lot of time. One user said, “It saves a lot of wasted effort by exploring the problem domain collaboratively before development begins.” Others find the idea less effective. Another comment reads, “Someone suggested this on my team. I personally don’t like the idea because these policies often times lead to bureaucracy and then nothing getting released. It is not that I am against thinking ahead but if I have to in details explain everything I do, then more time is spent documenting than actually creating which is the part I enjoy.”

To know more about this news, check out GitHub's official post.
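For the curious, here is a hedged sketch of what opening a draft pull request looks like through GitHub's REST API, using the reqwest crate from Rust; when the feature launched, the draft field sat behind the shadow-cat preview media type. The repository, branch names, and token are placeholders.

```rust
// Illustrative sketch: creating a draft pull request via the REST API.
// Needs reqwest (blocking + json features) and serde_json; the repo,
// branches, and token are placeholders.
use reqwest::blocking::Client;

fn main() -> Result<(), reqwest::Error> {
    let resp = Client::new()
        .post("https://api.github.com/repos/octocat/hello-world/pulls")
        .header("Authorization", "token <personal-access-token>")
        // Draft PRs launched behind a preview media type.
        .header("Accept", "application/vnd.github.shadow-cat-preview+json")
        .header("User-Agent", "draft-pr-sketch") // GitHub requires a UA
        .json(&serde_json::json!({
            "title": "WIP: new feature",
            "head": "feature-branch",
            "base": "master",
            "draft": true // the one field that makes this a draft PR
        }))
        .send()?;
    println!("status: {}", resp.status());
    Ok(())
}
```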
Western Digital RISC-V SweRV Core is now on GitHub
GitHub Octoverse: top machine learning packages, languages, and projects of 2018
Github wants to improve Open Source sustainability; invites maintainers to talk about their OSS challenges

Facebook and the U.S. government are negotiating over Facebook’s privacy issues

Amrata Joshi
15 Feb 2019
2 min read
Facebook has been in the news for its data breaches and data sharing practices for quite some time now. Last month, advocacy groups such as the Open Markets Institute, Color of Change, and the Electronic Privacy Information Center, among others, wrote to the Federal Trade Commission requesting the government to intervene in how Facebook operates. The letter included a list of actions the FTC could take, including a multibillion-dollar fine and changes to the company’s hiring practices. The advocacy groups wrote to the FTC, “The record of repeated violations of the consent order can no longer be ignored. The company’s (Facebook’s) business practices have imposed enormous costs on the privacy and security of Americans, children, and communities of color, and the health of democratic institutions in the United States and around the world.”

According to a report today by the Washington Post, the U.S. government and Facebook are negotiating a settlement over Facebook’s privacy issues that could require the company to pay a multibillion-dollar fine. The FTC has been investigating the revelations around Facebook’s Cambridge Analytica scandal; the investigation centers on whether the sharing of data with Cambridge Analytica and other privacy disputes violated a 2011 agreement with the FTC. Per the Washington Post, the FTC and Facebook haven’t yet agreed on the amount. Facebook reported $16.9 billion in fourth-quarter revenue and a profit of $6.9 billion. An eventual settlement might also require changes in how Facebook does business. Facebook has so far declined to comment on the report; Facebook’s spokeswoman said, “We have been working with the FTC and will continue to work with the FTC.”

To know more about this news, check out the official report by the Washington Post.

Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices
Facebook hires top EFF lawyer and Facebook critic as WhatsApp privacy policy manager
Facebook pays users $20/month to install a ‘Facebook Research’ VPN that spies on their phone and web activities, TechCrunch reports

DigitalOcean announces ‘Managed Databases for PostgreSQL’

Savia Lobo
15 Feb 2019
3 min read
Yesterday, the team at DigitalOcean announced ‘Managed Databases for PostgreSQL’, a fully managed and feature-rich database service, as a Valentine gift for its users. The new Managed Databases with PostgreSQL support allow developers to quickly build a scalable, high-performance database cluster with less hassle. One of the interesting aspects of this new offering is that users need not know anything about the Linux operating system or specific DevOps maintenance tasks.

Managed databases take care of several challenges, including:

- Identifying the optimal database infrastructure footprint
- Scaling infrastructure as business and data requirements grow
- Designing and managing highly available infrastructure and failover processes
- Implementing a complete and reliable backup and recovery strategy
- Forecasting and maintaining operational infrastructure costs

The team at DigitalOcean writes, “You’ll enjoy simple, predictable pricing that allows you to control your costs. Spin up a database node starting from $15 per month or high availability cluster from $50 per month. Backups are included for free with your service to keep things simple. Ingress bandwidth is always free, and egress fees ($0.01/GB per month) will be waived for 2019.”

Benefits of Managed Databases

Hassle-free database maintenance: Managed databases save a lot of time. All the user has to do is quickly deploy a database, and the service handles the rest. Users do not have to worry about security patches to the OS or database engine; once a new version or patch is available, a simple click enables it.

Highly secure and optimized for performance: All data in the new managed databases is encrypted at rest and in transit. One can use the Cloud Firewall to restrict connections to the database. The databases run on enterprise-class VM hardware with local SSD storage, giving lightning-fast performance.

Easy scalability: With Managed Databases, users can scale up at any time with virtually no impact on their application. One can spin up read-only nodes to scale read operations or remove compute overhead from reporting requirements.

Automatic failover: If any issue occurs with the primary node, traffic is automatically routed to the standby nodes. The DigitalOcean team recommends selecting a high-availability option to minimize the impact of a failure.

Simple and reliable backup and recovery: Backups are handled automatically and free of cost. Full backups are taken every day, and write-ahead logs are maintained to allow users to restore to any point in time during the retention period.

To know more about these new Managed Databases, visit the DigitalOcean website.
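As a sketch of what connecting to one of these managed clusters could look like from Rust, the snippet below uses the postgres and postgres-native-tls crates, since managed clusters require TLS. The host, port, and credentials are placeholders of the kind shown in the DigitalOcean control panel, not real values.

```rust
// Illustrative sketch: connecting to a managed PostgreSQL cluster over
// TLS. The connection string values below are placeholders.
use native_tls::TlsConnector;
use postgres::Client;
use postgres_native_tls::MakeTlsConnector;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let tls = MakeTlsConnector::new(TlsConnector::new()?);
    let mut client = Client::connect(
        "host=db-postgresql-nyc1-12345.db.ondigitalocean.com port=25060 \
         user=doadmin password=<secret> dbname=defaultdb sslmode=require",
        tls,
    )?;
    let row = client.query_one("SELECT version()", &[])?;
    let version: String = row.get(0);
    println!("connected to: {}", version);
    Ok(())
}
```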
Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records
Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
2018 is the year of graph databases. Here’s why.


Unreal Engine 4.22 update: support added for Microsoft’s DirectX Raytracing (DXR)

Melisha Dsouza
15 Feb 2019
3 min read
On 12th February, Epic Games released a preview build of Unreal Engine 4.22, and a major upgrade among numerous other features and fixes is support for real-time ray tracing and path tracing. The new build extends preliminary support for Microsoft's DirectX Raytracing (DXR) extensions to the DirectX 12 API, so developers can now try their hands at ray-traced games built with Unreal Engine 4.

Very few games support ray tracing so far. Currently, only Battlefield V (Ray Traced Reflections) and Metro Exodus (Ray Traced Global Illumination) feature ray tracing effects, and those are built on the proprietary Frostbite 3 and 4A game engines.

Fun Fact: Ray tracing is a much more advanced and lifelike way of rendering light and shadows in a scene. Movies and TV shows use this to create and blend in amazing CG work with real-life scenes, leading to more life-like, interactive, and immersive game worlds with more realistic lighting, shadows, and materials.

The patch notes released by the team state that low-level support for ray tracing has been added:

- Added ray tracing low-level support: a low-level layer on top of UE DirectX 12 that provides support for DXR and allows creating and using ray tracing shaders (ray generation shaders, hit shaders, etc.) to add ray tracing effects.
- Added high-level ray tracing features: rect area lights, soft shadows, reflections, reflected shadows, ambient occlusion, RTGI (ray traced global illumination), translucency, clearcoat, IBL, sky
- Geometry types: triangle meshes, static and skeletal (morph targets & skin cache), Niagara particles support
- Texture LOD
- Denoiser: shadows, reflections, AO
- Path Tracer: unbiased, full GI path tracer for making ground-truth reference renders inside UE4

According to HardOCP, the feature isn't technically tied to Nvidia RTX, but since Turing cards are the only ones with driver support for DirectX Raytracing at the moment, developers need an RTX 2000 series GPU to test out Unreal's ray tracing. There has been much debate about NVIDIA's RTX in the past: while the concept sounded interesting at the beginning, very few engines adopted it, simply because previous-generation processors cannot support all the features of NVIDIA's RTX. Now, with DXR in the picture, it will be interesting to see the outcome of games developed using ray tracing.

Head over to Unreal Engine’s official post to know more about this news.

Implementing an AI in Unreal Engine 4 with AI Perception components [Tutorial]
Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices
Game Engine Wars: Unity vs Unreal Engine

OpenAI’s new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words

Natasha Mathur
15 Feb 2019
3 min read
OpenAI researchers demonstrated a new AI model yesterday, called GPT-2, that is capable of generating coherent paragraphs of text without needing any task-specific training. In other words, give it the first line of a story, and it’ll form the rest. Apart from generating articles, it can also perform rudimentary reading comprehension, summarization, machine translation, and question answering.

GPT-2 is an unsupervised language model comprising 1.5 billion parameters, trained on a dataset of 8 million web pages. “GPT-2 is simply trained to predict the next word in 40GB of internet text”, says the OpenAI team. The team states that it is superior to other language models trained on specific domains (like Wikipedia, news, or books) as it doesn’t need to use those domain-specific training datasets. For language-related tasks such as question answering, reading comprehension, and summarization, GPT-2 learns directly from the raw text and doesn’t require any task-specific training data.

The OpenAI team describes the GPT-2 model as ‘chameleon-like’, easily adapting to the style and content of the input text. However, the team has observed certain failures in the model, such as repetitive text, world modeling failures, and unnatural topic switching. Finding a good sample depends on the familiarity of the model with that sample’s context: when the model is prompted with topics that are ‘highly represented in the data’, like Miley Cyrus or Lord of the Rings, it is able to generate reasonable samples 50% of the time, whereas it performs poorly on highly technical or complex content.

The OpenAI team envisions the use of GPT-2 in the development of AI writing assistants, advanced dialogue agents, unsupervised translation between languages, and enhanced speech recognition systems. It has also spelled out the potential misuses of GPT-2, as it could be used to generate misleading news articles and to automate the large-scale production of fake and phishing content on social media. Due to these concerns, OpenAI has decided to release only a ‘small’ version of GPT-2, with its sampling code and a research paper, for researchers to experiment with. The dataset, training code, and GPT-2 model weights have been excluded from the release. The OpenAI team states that this release strategy will give them and the overall AI community time to discuss the implications of such systems more deeply. It also wants governments to take initiatives to monitor the societal impact of AI technologies and to track the progress of capabilities in these systems. “If pursued, these efforts could yield a better evidence base for decisions by AI labs and governments regarding publication decisions and AI policy more broadly”, states the OpenAI team.

Public reaction to the news is largely positive; however, not everyone is okay with OpenAI’s release strategy, and some feel the move signals ‘closed AI’ and propagates a ‘fear of AI’:

https://twitter.com/chipro/status/1096196359403712512
https://twitter.com/ericjang11/status/1096236147720708096
https://twitter.com/SimonRMerton/status/1096104677001842688
https://twitter.com/AnimaAnandkumar/status/1096209990916833280
https://twitter.com/mark_riedl/status/1096129834927964160

For more information, check out the official OpenAI GPT-2 blog post.
OpenAI charter puts safety, standards, and transparency first
OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
OpenAI builds reinforcement learning based system giving robots human like dexterity

Reddit’s 2018 Transparency report includes copyright removals, restorations, and more!

Savia Lobo
14 Feb 2019
4 min read
Yesterday, the Reddit community released its Transparency Report for 2018. The report includes additional information on copyright removals, restorations, and retractions, as well as removals for violations of Reddit’s Content Policy and subreddit rules.

In 2018, Reddit received 752 requests for the preservation or production of user account information from governmental entities; more than half of the production requests, around 310 of them, were non-emergency legal process from United States entities. Reddit carefully reviewed each request for compliance with legal standards and followed the procedure described in Reddit’s Privacy Policy. Of the 752 requests submitted by governmental entities:

- 171 were requests to preserve user account information; and
- 581 were requests to produce user account information.

According to the report, “In 2018, Reddit received 171 preservation requests, a 116% increase over the 79 preservation requests received in 2017. Reddit complied with 91% of the preservation requests received.”

Source: Reddit report

Reddit also sometimes receives requests from governmental entities to produce information. On receiving such a request, Reddit reviews it to ensure it is consistent with ECPA and is otherwise legally valid. In 2018, Reddit received a total of 581 requests to produce user account information from both United States and foreign governmental entities, a 151% increase compared to the number received in 2017.

Source: Reddit report

Reddit received 319 non-emergency pieces of legal process from United States governmental entities seeking the production of user account information, such as subpoenas, court orders, and search warrants. It also received 28 requests for the production of user account information from foreign governmental authorities (excluding emergency requests), along with a total of 234 Emergency Disclosure Requests globally, in response to 162 (69%) of which it disclosed user account information. Alongside governmental requests, Reddit received 15 requests for private user information from non-governmental entities, an increase from the 5 non-governmental requests received in 2017.

Reddit also received content removal requests from:

- governmental entities and other civil legal demands, for reasons such as alleged violations of local laws;
- copyright owners regarding alleged copyright infringement; and
- users or Reddit administrators regarding violations of Reddit’s Content Policy.

One request to remove content came from a governmental entity in the US and had nothing to do with copyright. “The request was for the removal of an image and a large volume of comments made underneath it for potential breach of federal law," the report says. "As the governmental entity did not provide sufficient context regarding how the image violated the law, did not provide Reddit with valid legal process compelling removal, and the request to remove the entire post as well as the comment thread appeared to be overbroad, Reddit did not comply with the request."

According to the report, prior to 2018, each piece of content requested to be removed was counted as a distinct DMCA notice. This is how the DMCA notice numbers were reported in previous Transparency Reports (i.e., 3,294 “notifications” in 2016 and 7,825 “notifications” in 2017). The report states, “The number of notices Reddit received in 2018 more than tripled from the 3,130 DMCA notices (and the 7,825 removal requests) received in 2017, and increased by over 8 times from the 1,155 notices (and 3,294 removal requests) received in 2016.”

Source: Reddit report

When asked about Reddit’s 2018 transparency report, Reddit CEO Steve Huffman said, “This year, we expanded the report to included details on two additional types of content removals: those taken by us at Reddit, Inc., and those taken by subreddit moderators (including Automod actions). We remove content that is in violation of our site-wide policies, but subreddits often have additional rules specific to the purpose, tone, and norms of their community. You can now see the breakdown of these two types of takedowns for a more holistic view of company and community actions.”

To know more about the report in detail, read Reddit’s Transparency Report 2018.

Reddit has raised $300 million in a new funding round led by China’s Tencent
Reddit takes stands against the EU copyright directives; greets EU redditors with ‘warning box’
Reddit posts an update to the FireEye’s report on suspected Iranian influence operation