
Tech News

3711 Articles

Facebook releases DeepFocus, an AI-powered rendering system to make virtual reality more real

Natasha Mathur
20 Dec 2018
3 min read
Yesterday, Facebook released DeepFocus, a new AI-powered rendering system that works with Half Dome, a special prototype headset that Facebook’s Reality Lab (FRL) team has been working on over the past three years. Half Dome is an example of a "varifocal" head-mounted display (HMD) that comprises eye-tracking camera systems, wide-field-of-view optics, and adjustable display lenses that move forward and backward to match your eye movements. This makes the VR experience a lot more comfortable, natural, and immersive. However, Half Dome needs software to work to its full potential, and that is where DeepFocus comes into the picture.

“Our eyes are like tiny cameras: When they focus on a given object, the parts of the scene that are at a different depth look blurry. Those blurry regions help our visual system make sense of the three-dimensional structure of the world and help us decide where to focus our eyes next. While varifocal VR headsets can deliver a crisp image anywhere the viewer looks, DeepFocus allows us to render the rest of the scene just the way it looks in the real world: naturally blurry,” says Marina Zannoli, a vision scientist at FRL.

Facebook is also open-sourcing DeepFocus, making the system’s code and the data set used to train it available to help other VR researchers incorporate it into their work. “By making our DeepFocus source and training data available, we’ve provided a framework not just for engineers developing new VR systems, but also for vision scientists and other researchers studying long-standing perceptual questions,” say the researchers.

https://www.youtube.com/watch?v=Xp6OlfJEmAo

DeepFocus

A research paper presented at SIGGRAPH Asia 2018 explains that DeepFocus is a unified rendering and optimization framework based on convolutional neural networks that solves a full range of computational tasks. It also enables real-time operation of accommodation-supporting HMDs.
The CNN comprises “volume-preserving” interleaving layers that help it quickly figure out the high-level features within an image. For instance, the paper mentions that it accurately synthesizes defocus blur, focal stacks, multilayer decompositions, and multiview imagery. Moreover, it makes use of only commonly available RGB-D images, which enables real-time, near-correct depictions of retinal blur. The researchers explain that DeepFocus is “tailored to support real-time image synthesis … and … includes volume-preserving interleaving layers … to reduce the spatial dimensions of the input, while fully preserving image details, allowing for significantly improved runtimes”.

This model is more efficient than the traditional AI systems used for deep learning-based image analysis, as DeepFocus can process the visuals while preserving the ultrasharp image resolutions that are necessary for delivering a high-quality VR experience. The researchers mention that DeepFocus can also grasp complex image effects and relations, including foreground and background defocusing.

DeepFocus isn’t limited to Oculus HMDs, however. Since DeepFocus supports high-quality image synthesis for multifocal and light-field displays, it is applicable to a complete range of next-generation head-mounted display technologies. “DeepFocus may have provided the last piece of the puzzle for rendering real-time blur, but the cutting-edge research that our system will power is only just beginning”, say the researchers.

For more information, check out the official Oculus Blog.

Magic Leap unveils Mica, a human-like AI in augmented reality
Magic Leap acquires Computes Inc to enhance spatial computing
Oculus Connect 5 2018: Day 1 highlights include Oculus Quest, Vader Immortal and more!
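The “volume-preserving interleaving” idea can be pictured as a space-to-depth rearrangement: spatial resolution shrinks while every pixel value is kept, just moved into extra channels. The sketch below is a minimal pure-Python illustration of that rearrangement, not code from the DeepFocus release; the function name and the 2×2 block size are assumptions chosen for demonstration.

```python
def space_to_depth(image, block=2):
    """Rearrange an H x W single-channel image (list of lists) into an
    (H/block) x (W/block) grid where each cell holds the block*block
    pixel values as channels. No pixel value is discarded, so the
    transform is volume-preserving: H*W values in, H*W values out."""
    h, w = len(image), len(image[0])
    assert h % block == 0 and w % block == 0
    out = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            # Gather this block's pixels into a channel vector.
            channels = [image[i + di][j + dj]
                        for di in range(block)
                        for dj in range(block)]
            row.append(channels)
        out.append(row)
    return out

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
result = space_to_depth(img)
# Spatial size halves in each dimension; channel count grows 4x,
# so the total number of values is unchanged (16 in, 16 out).
```

Because no information is thrown away, later layers can still reconstruct fine image detail, which is the property the paper credits for the improved runtimes.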

Microsoft urgently releases Out-of-Band patch for an active Internet Explorer remote code execution zero-day vulnerability

Melisha Dsouza
20 Dec 2018
3 min read
Yesterday, Microsoft released an out-of-band patch for a vulnerability in Internet Explorer that attackers are actively exploiting on the Internet. The IE zero-day can allow an attacker to execute malicious code on a user's computer. The vulnerability has been assigned the ID CVE-2018-8653, and the security update has been released as KB4483187, titled "Cumulative security update for Internet Explorer: December 19, 2018". It is available for Internet Explorer 11 on Windows 10, Windows 8.1, and Windows 7 SP1; Internet Explorer 10 on Windows Server 2012; and Internet Explorer 9 on Windows Server 2008.

Microsoft has acknowledged Clement Lecigne of Google’s Threat Analysis Group for reporting the exploitation of this Internet Explorer vulnerability. Apart from the security advisory released yesterday, neither Microsoft nor Google has shared any details about the attacks involving the flaw.

Vulnerability details

According to Microsoft's security advisory, the remote code execution vulnerability stems from IE’s memory handling in Jscript.dll. An attacker could corrupt IE’s memory to allow code execution on the affected system. The attacker could convince a user to visit a malicious website, which could then exploit this vulnerability and execute code on the user’s local machine. After exploiting the vulnerability, the attackers would be able to run commands on the victim's system, such as downloading further malware or scripts, or executing any command that the currently logged-in user has access to. The issue can also be exploited through applications that embed the IE scripting engine to render web-based content, such as the apps in the Office suite. According to Microsoft, the attacker gets code execution rights under the same privileges the victim has.
If the victim is using an account with limited access, the damage can be contained to simple operations; however, if the user has administrator rights, the attacker can greatly increase the scope of the damage.

Mitigations and workarounds

According to ZDNet, in the previous four months Microsoft has patched four other zero-days, all of which allow an "elevation of privilege". This means that if a victim has missed any of the previous four Windows Patch Tuesday patches, an attacker can chain the IE zero-day with one of the previous zero-days (CVE-2018-8611, CVE-2018-8589, CVE-2018-8453, CVE-2018-8440) to gain SYSTEM-level access and take over a targeted computer.

Microsoft has assured customers who have Windows Update enabled and have applied the latest security updates that they are automatically protected against exploits. It has advised users to install the update as soon as possible, even if they don't normally use IE to browse sites. Those who want to mitigate the vulnerability until the update is installed can do so by removing access to the jscript.dll file for the Everyone group. According to Microsoft, this mitigation will not cause problems with Internet Explorer 11, 10, or 9, as they use Jscript9.dll by default. No other workarounds are listed on the security advisory for this vulnerability.

Read the full security advisory on Microsoft’s blog.

Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft calls on governments to regulate facial recognition tech now, before it is too late
NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release

The District of Columbia files a lawsuit against Facebook for the Cambridge Analytica scandal

Prasad Ramesh
20 Dec 2018
2 min read
Karl Racine, the attorney general of the District of Columbia, Washington DC, has filed a lawsuit against Facebook, nine months after the Cambridge Analytica scandal that affected over 87 million people worldwide. The lawsuit, which was filed on Wednesday, centers on the Cambridge Analytica scandal, in which user data was harvested without users' permission. An investigation by the New York Times earlier this week showed that Facebook had granted big companies numerous exceptions to its privacy policies. This made user data available via loopholes to companies including Amazon, Microsoft, Netflix, Spotify, and Sony.

This lawsuit breaks the silence, or lack of action, from US regulators on Facebook’s disregard for user data privacy. Congress, Silicon Valley critics, and even a global committee had urged Facebook to rethink its business model. Among the recent public hearings, Zuckerberg did not bother to attend the global hearing in the UK, unlike Google CEO Sundar Pichai, who was present at the Congress hearing.

The lawsuit states: “Facebook collects and maintains troves of its personal user data, behavior on [and] off its website. Facebook permits third-party developers, including application developers and device makers, to access such sensitive information. Facebook says that it will take appropriate steps to maintain and protect user data but has failed [to] live up to this commitment.”

It further alleges: “Facebook’s lax oversight of user data with respect to third-party applications. It failed to disclose the affected consumers. Facebook’s privacy settings are ambiguous and difficult to understand.” It goes on to argue that this pattern characterizes Facebook’s relationships with partner companies.

The lawsuit also touches on other issues regarding Facebook, including its relationships with smartphone makers like BlackBerry, which could access data in ways that bypassed users’ privacy settings. Meanwhile, there is no news from the federal regulators.
Know more about this lawsuit in detail on the District of Columbia government’s official website.

Is anti-trust regulation coming to Facebook following the fake news inquiry made by a global panel in the House of Commons, UK?
NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
British parliament publishes confidential Facebook documents that underscore the growth at any cost culture at Facebook

Twitter memes are being used to hide malware

Savia Lobo
19 Dec 2018
3 min read
Last week, a group of security researchers reported that they had found new malware that takes its instructions from code hidden in memes posted to Twitter. The technique is known as steganography, a method popularly used by cybercriminals to conceal a malicious payload within an image in order to evade security solutions.

According to Trend Micro, the malware authors posted two tweets containing malicious memes on October 25 and 26. The images were tweeted from a Twitter account created in 2017. “The memes contain an embedded command that is parsed by the malware after it’s downloaded from the malicious Twitter account onto the victim’s machine, acting as a C&C service for the already-placed malware”, reported Trend Micro.

According to the blog post, the new threat is detected as TROJAN.MSIL.BERBOMTHUM.AA. Notably, the malware gets its commands from a legitimate source, a popular social networking platform, and the memes cannot be taken down until the malicious Twitter account is disabled. Twitter, for its part, took the account offline as of December 13, 2018.

Malicious memes are no laughing matter

The memes posted via the malicious Twitter account have a “/print” command hidden in them, which instructs the malware to take screenshots of the infected machine. These screenshots are then sent to a C&C server whose address is obtained through a hard-coded URL on pastebin.com. Next, the malware sends the collected information or command output to the attacker by uploading it to a specific URL address. According to Trend Micro, “During analysis, we saw that the Pastebin URL points to an internal or private IP address, which is possibly a temporary placeholder used by the attackers.
The malware then parses the content of the malicious Twitter account and begins looking for an image file using the pattern: “<img src=\”(.*?):thumb\” width=\”.*?\” height=\”.*?\”/>” on the account.” (Source: Trend Micro)

Researchers have also listed some other commands supported by this malware, including /processos to retrieve the list of running processes, /clip to capture clipboard content, /username to retrieve the username from the infected machine, and /docs to retrieve filenames from a predefined path (Desktop, %AppData%, etc.).

According to TechCrunch, “The malware appears to have first appeared in mid-October, according to a hash analysis by VirusTotal, around the time that the Pastebin post was first created.” After Trend Micro reported the account, Twitter pulled it offline and suspended it permanently.

How the biggest ad fraud rented datacenter servers and used botnet malware to infect 1.7m systems
How to build a convolution neural network based malware detector using malware visualization [Tutorial]
Privilege escalation: Entry point for malware via program errors
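Stripped of the backslash-escaped quotes, the pattern quoted by Trend Micro is an ordinary regular expression. As a rough illustration of how such a parser works (the HTML snippet and image URL below are invented for demonstration, not taken from the actual malware traffic), matching it in Python looks like this:

```python
import re

# The image-locating pattern from the Trend Micro report, with the
# backslash-escaped quotes written as plain quotes.
PATTERN = r'<img src="(.*?):thumb" width=".*?" height=".*?"/>'

# Hypothetical fragment of fetched profile HTML (illustrative only).
html = ('<div><img src="https://pbs.twimg.com/media/meme123:thumb" '
        'width="150" height="150"/></div>')

match = re.search(PATTERN, html)
if match:
    # The capture group holds the base image URL; the malware would
    # then download that image and scan it for embedded commands.
    image_url = match.group(1)
```

The lazy `(.*?)` group stops at the first `:thumb`, so the capture is the image URL without the thumbnail suffix.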

NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release

Melisha Dsouza
19 Dec 2018
6 min read
“No one should trust Facebook until they change their business model.” --Roger McNamee, an early investor in Facebook

The New York Times has confronted Facebook once again. The media giant obtained hundreds of internal Facebook documents that show the tech giant has been giving some of the world’s largest technology companies “more intrusive access to users’ personal data than it has disclosed”, and has “effectively exempted those business partners from its usual privacy rules”. The records were generated in 2017 by the company’s internal system for tracking partnerships.

The Times points out how these deals helped Facebook gain more users and lift its advertising revenue. It was a win-win situation for both Facebook and its partner companies: partner companies acquired features to make their products more attractive, while Facebook users could connect with friends across different devices and websites. The deals revealed in the documents benefited more than 150 companies, including tech businesses, online retailers, entertainment sites, automakers, and media organizations.

The report speculates whether Facebook ran afoul of a 2011 consent agreement with the Federal Trade Commission that barred the social network from sharing user data without explicit permission. Mr. Satterfield, Facebook’s privacy director, said its partners were subject to “rigorous controls.” Facebook officials claimed the company had disclosed its sharing deals in its privacy policy since 2010. The New York Times, however, says that the language in the policy about its service providers does not specify what data Facebook shares or which companies it shares it with. With most of the partnerships, Mr. Satterfield said, the F.T.C. agreement did not require Facebook to secure users’ consent before sharing data because Facebook considered the partners “extensions of itself”.
He also stated that the partners were prohibited from using personal information for other purposes and that “Facebook’s partners don’t get to ignore people’s privacy settings.” This data was shared with some of the largest names in the tech industry, including Amazon, Microsoft, and Yahoo, who claimed that they had used the data appropriately, without expanding on the sharing deals in detail.

What did the documents reveal?

Here are some key points from the report that stood out:
  • Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent.
  • Netflix and Spotify were given the ability to read Facebook users’ private messages.
  • Amazon was permitted to obtain users’ names and contact information through their friends.
  • Yahoo could view streams of friends’ posts, despite public statements that it had stopped that type of sharing years earlier.
  • Facebook obtained data from multiple partners for a friend-suggestion tool called “People You May Know.” There have been reported cases of the tool recommending friend connections between patients of the same psychiatrist, between estranged family members, and between a harasser and his victim.
  • Facebook used contact lists from partners, including Amazon, Yahoo, and Huawei, to gain deeper insight into people’s relationships and suggest more connections.
  • Some deals described in the documents were limited to sharing non-identifying information with research firms or enabling game makers to accommodate huge numbers of players.
  • Some partners were allowed to see users’ contact information through their friends, even after Facebook said in 2014 that it was stripping all applications of that power. Sony, Microsoft, Amazon, and others could obtain users’ email addresses through their friends.
  • Spotify, Netflix, and the Royal Bank of Canada were allowed to read, write, and delete users’ private messages.
In late 2009, Facebook launched “instant personalization”, which changed the privacy settings of the 400 million people then using the service, making some of their information accessible to the entire internet. It then shared that information, including users’ locations and religious and political leanings, with Microsoft and other partners. The F.T.C. investigated, and in 2011 cited these privacy changes as a deceptive practice. Facebook officials then stopped mentioning instant personalization in public and entered into the consent agreement. In 2014, Facebook ended instant personalization and removed access to friends’ information. But in a previously unreported agreement, the social network’s engineers continued allowing Bing, Pandora, and Rotten Tomatoes, the movie and television review site, access to much of the data they had gotten for the discontinued feature.

Facebook’s response to the New York Times report

In response to the New York Times report, Konstantinos Papamiltiadis, Director of Developer Platforms and Programs, said in a blog post: “To be clear: none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC”. He also explained that all the work done in this domain was so that “people could have more social experiences.”

The post goes on to somewhat justify the claims made in the Times report. His statement on the instant personalization deal that the leaked documents revealed (“We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April. Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs.”) does raise questions about Facebook’s seriousness with respect to user privacy.
The post also claims that Facebook does not “have evidence that data was used or misused after the program was shut down”, while conceding, “we shouldn’t have left the APIs in place after we shut down instant personalization.”

The post has drawn sharp criticism from Alex Stamos, a former chief security officer at Facebook. He argues that the response is not an adequate answer to the claims made by the Times report, and that “it makes the same mistake of blending all kinds of different integrations and models into a bunch of prose and it is very hard to match up the responses to the Times' claims.”

https://twitter.com/reckless/status/1075225675756421120

That said, he also tweeted that allowing third-party clients is the kind of pro-competition move we want to see from dominant platforms, but that integrations that are sneaky or send secret data to servers controlled by others really are wrong.

Users have demanded that Facebook come clean about the explicit details of the access deals. Some users have also spoken up about the nature of the legal contracts a user has to sign to use a particular tech service.

You can head over to the New York Times for more insights on this news.

British parliament publishes confidential Facebook documents that underscore the growth at any cost culture at Facebook
Ex-Facebook manager says Facebook has a “black people problem” and suggests ways to improve
France to levy digital services tax on big tech companies like Google, Apple, Facebook, Amazon in the new year

SciPy 1.2.0 is out with a new optimization algorithm named ‘shgo’ and more!

Savia Lobo
19 Dec 2018
3 min read
Yesterday, the SciPy community released SciPy 1.2.0. This release contains many new features, along with numerous bug fixes, improved test coverage, and better documentation. It also includes a number of deprecations and API changes; for example, the functions hyp2f0, hyp1f2, and hyp3f0 in scipy.special have been deprecated. The release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater. According to the community, “this will be the last SciPy release to support Python 2.7. Consequently, the 1.2.x series will be long term support (LTS) release; we will backport bug fixes until 1 Jan 2020”.

Highlights of SciPy 1.2.0
  • Improvements in 1-D root finding, with a new solver, toms748, and a new unified interface, root_scalar.
  • A new dual_annealing optimization method that combines stochastic and local deterministic searching.
  • A new optimization algorithm named shgo (simplicial homology global optimization) for derivative-free optimization problems.
  • A new category of quaternion-based transformations available in scipy.spatial.transform.

New improvements in SciPy 1.2.0

scipy.ndimage improvements: Proper spline coefficient calculations have been added for the mirror, wrap, and reflect modes of scipy.ndimage.rotate.

scipy.fftpack improvements: scipy.fftpack now supports DCT-IV, DST-IV, DCT-I, and DST-I orthonormalization.

scipy.interpolate improvements: scipy.interpolate.pade now accepts a new argument for the order of the numerator.

scipy.cluster improvements: scipy.cluster.vq.kmeans2 has gained a new initialization method known as kmeans++.

scipy.special improvements: The function softmax has been added to scipy.special.

scipy.optimize improvements: The one-dimensional nonlinear solvers have been given a unified interface, scipy.optimize.root_scalar, similar to the scipy.optimize.root interface for multi-dimensional solvers. scipy.optimize.newton can now accept a scalar or an array.
scipy.signal improvements: Digital filter design functions now include a parameter to specify the sampling rate. Previously, digital filters could only be specified using normalized frequency, but different functions used different scales (e.g. 0 to 1 for butter vs 0 to π for freqz), leading to errors and confusion.

scipy.sparse improvements: The scipy.sparse.bsr_matrix.tocsr method is now implemented directly instead of converting via COO format, and the scipy.sparse.bsr_matrix.tocsc method is now routed via CSR conversion instead of COO. The efficiency of both conversions is improved.

scipy.spatial improvements: The function scipy.spatial.distance.jaccard has been modified to return 0 instead of np.nan when two all-zero vectors are compared. Support for the Jensen-Shannon distance, the square root of the Jensen-Shannon divergence, has been added under scipy.spatial.distance.jensenshannon. A new category of quaternion-based transformations is available in scipy.spatial.transform, including spherical linear interpolation of rotations (Slerp), conversions to and from quaternions and Euler angles, general rotation and inversion capabilities (spatial.transform.Rotation), and uniform random sampling of 3D rotations (spatial.transform.Rotation.random).

scipy.stats improvements: Levy stable parameter estimation, PDF, and CDF calculations are now supported for scipy.stats.levy_stable. stats and mstats now have access to a new regression method, siegelslopes, a robust linear regression algorithm. The Brunner-Munzel test is now available as brunnermunzel in stats and mstats.

scipy.linalg improvements: scipy.linalg.lapack now exposes the LAPACK routines using Rectangular Full Packed (RFP) storage for upper triangular, lower triangular, symmetric, or Hermitian matrices; the upper trapezoidal fat matrix RZ decomposition routines are available as well.

To know more about SciPy 1.2.0 and its backward-incompatible changes, read the release notes on GitHub.
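The newly added scipy.special.softmax computes exp(x)/sum(exp(x)) over an array. Below is a minimal pure-Python sketch of the same computation for a 1-D input, using the standard max-subtraction trick for numerical stability; it is an illustration of the formula, not SciPy's actual implementation.

```python
import math

def softmax(values):
    """Return exp(v)/sum(exp(v)) for each v in a 1-D sequence, as
    scipy.special.softmax does. Subtracting the maximum first avoids
    overflow in exp() without changing the result."""
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
# The outputs are positive and sum to 1, so they can be read as
# probabilities; larger inputs map to larger probabilities.
```

For real work, scipy.special.softmax additionally handles multi-dimensional arrays and an axis argument, which this sketch omits.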
Implementing matrix operations using SciPy and NumPy
How to Compute Interpolation in SciPy
How to compute Discrete Fourier Transform (DFT) using SciPy

Introducing DPAGE, a web builder to build web pages on the Blockstack decentralized internet

Prasad Ramesh
19 Dec 2018
3 min read
DPAGE is a web page builder that developers can use to get simple web pages up and running on Blockstack's decentralized internet. DPAGE is built on top of Blockstack, an infrastructure on which you can build decentralized blockchain applications. You need a Blockstack account to log in and start using DPAGE. The Blockstack ID used to log in is stored on the blockchain, and all user data is stored on a Gaia node of your choosing. This decentralized setup gives users several advantages over a conventional centralized app:

Your data is yours: If you don't like DPAGE after using it, you can build your own app or move to any other web page builder, and all your data goes with you; it is not owned by any particular web page or app. Users are not restricted by vendor lock-in.

A Blockstack ID is virtually impossible to block, unlike centralized identities: Google or Facebook IDs can be blocked by the companies that issue them.

All private user data is encrypted end-to-end, which means that no one else can read it, including DPAGE's creators.

Data is not stored with DPAGE: Profile details and user data are stored on Blockstack's Gaia storage hub by default; DPAGE itself doesn't store any user data on its servers. You can also run your own storage hub on a server of your choice. By default the data lives in Blockstack's 'personal data lockers' built on Google, AWS, and Azure.

It is safer than some centralized web pages: As all private data is encrypted, it's more difficult for hackers to steal user data from the decentralized app. There is no central database that contains all the data, so hackers also have less incentive to break into DPAGE. However, DDoS attacks are a possibility if attackers target a specific Gaia hub.

There is no user-specific tracking: DPAGE only collects non-identifiable analytics to improve the service, and the service itself doesn't store or read private pages.
There are some positive reactions on Hacker News. One comment reads: “This indeed a seriously cool product, hope more people realize it.” Another says: “Nice, I think this is what the web needs, a Unix approach so tools can be built on top and hosts are interchangeable.”

To check out DPAGE, visit their website.

The decentralized web – Trick or Treat?
Origin DApp: A decentralized marketplace on Ethereum mainnet aims to disrupt gig economy platforms like Airbnb and Uber
Microsoft announces ‘Decentralized Identity’ in partnership with DIF and W3C Credentials Community Group

Patreon speaks out against the protests over its banning Sargon of Akkad for violating its rules on hate speech

Natasha Mathur
19 Dec 2018
3 min read
Patreon, a popular crowdfunding platform, published a post yesterday defending its removal last week of Sargon of Akkad, the online alias of Carl Benjamin, an English YouTuber famous for his anti-feminist content, over concerns that he violated its policies on hate speech. Patreon has been receiving backlash ever since from users and patrons of the website, who are calling for a boycott.

“Patreon does not and will not condone hate speech in any of its forms. We stand by our policies against hate speech. We believe it’s essential for Patreon to have strong policies against hate speech to build a safe community for our creators and their patrons”, says the Patreon team.

Patreon mentioned that it reviews the creations posted on other platforms by content creators who are funded via Patreon. Since Benjamin is well known for his collaborations with other creators, Patreon’s community guidelines, which strictly prohibit hate speech, also apply to those collaborations. According to the guidelines, “Hate speech includes serious attacks, or even negative generalizations, of people based on their race [and] sexual orientation.” In an interview on another YouTuber’s channel, Benjamin used racial slurs linked with “negative generalizations of behavior”, in contrast to how people of those races actually act, to insult others. Apart from racial slurs, he also used slurs related to sexual orientation, which violates Patreon’s community guidelines.

However, a lot of people are not happy with Patreon’s decision. For instance, Sam Harris, a popular American author, podcast host, and neuroscientist, who had one of the top-grossing accounts on Patreon (with nearly 9,000 paying patrons at the end of November), deleted his account earlier this week, accusing the platform of “political bias”. He wrote, “the crowdfunding site Patreon has banned several prominent content creators from its platform.
While the company insists that each was in violation of its terms of service, these recent expulsions seem more readily explained by political bias. I consider it no longer tenable to expose any part of my podcast funding to the whims of Patreon’s ‘Trust and Safety’ committee”.

https://twitter.com/SamHarrisOrg/status/1074504882210562048

Apart from banning Carl Benjamin, Patreon also banned Milo Yiannopoulos, a British public speaker and YouTuber with over 839,286 subscribers, earlier this month over his association with the Proud Boys, which Patreon has classified as a hate group.

https://twitter.com/Patreon/status/1070446085787668480

James Allsup, an alt-right political commentator and associate of Yiannopoulos', was also banned from Patreon last month for his association with hate groups.

Amid this controversy, some top Patreon creators, such as Jordan Peterson, a popular Canadian clinical psychologist whose YouTube channel has over 1.6M subscribers, and Dave Rubin, an American libertarian political commentator, announced plans earlier this week to start an alternative to Patreon. Peterson said that the new platform will work on a subscriber model similar to Patreon’s, only with a few additional features.

https://www.youtube.com/watch?v=GWz1RDVoqw4

“We understand some people don’t believe in the concept of hate speech and don’t agree with Patreon removing creators on the grounds of violating our Community Guidelines for using hate speech. We have a different view,” says the Patreon team.

Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media
Twitter takes action towards dehumanizing speech with its new policy
How IRA hacked American democracy using social media and meme warfare to promote disinformation and polarization: A new report to Senate Intelligence Committee

Chrome 72 Beta releases with public class fields, user activation, and more

Bhagyashree R
19 Dec 2018
2 min read
Google yesterday released Chrome 72 Beta for Android, Chrome OS, Linux, macOS, and Windows. This version comes with support for public class fields, a user activation query API, and more.

Public class fields

You can now declare public class fields in scripts, either initialized or uninitialized. To use public class fields, declare them inside a class declaration but outside of any member functions. Support for private class fields will be added in future releases.

User activation query API

Chrome 72 Beta comes with a user activation query API, which lets you check whether there has been a user activation. This is introduced to avoid annoying web page behaviors such as unwanted autoplay. Additionally, it enables embedded iframes to examine postMessage() calls to determine whether they occurred within the context of a user activation.

Well-formed JSON.stringify

Previously, JSON.stringify returned ill-formed Unicode strings if the input contained any lone surrogates. With this change, well-formed JSON.stringify outputs escape sequences for lone surrogates, making its output valid Unicode and representable in UTF-8.

What has been removed or deprecated?

Popups during page unload are no longer allowed: pages can no longer use window.open() to open a new page during unloading.
HTTP-Based Public Key Pinning (HPKP) is removed: HPKP was introduced to allow websites to send an HTTP header that pins one or more of the public keys present in the site's certificate chain, but it has seen very low adoption and can also create risks of denial of service and hostile pinning.
Rendering FTP resources is deprecated: rendering resources from FTP servers is no longer allowed. Directory listings will still be generated, but any non-directory listing will be downloaded rather than rendered in the browser.

Along with these updates, TLS 1.0 and TLS 1.1 are deprecated, with removal expected in Chrome 81. Read the detailed list of updates on Chromium’s blog.
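The new field syntax is easy to see in a short snippet (the class and field names here are illustrative, not taken from the release notes):

```javascript
// Public class fields are declared inside the class body but
// outside any member function; they may be initialized or not.
class Counter {
  count = 0;   // initialized public field
  label;       // uninitialized field, starts as undefined
  increment() {
    this.count += 1;
  }
}

const c = new Counter();
c.increment();
console.log(c.count); // 1
console.log(c.label); // undefined
```

Fields declared this way live on each instance, unlike methods, which live on the prototype.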
Google’s V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations “ChromeOS is ready for web development” – A talk by Dan Dascalescu at the Chrome Web Summit 2018

Netflix adopts Spring Boot as its core Java framework

Amrata Joshi
19 Dec 2018
2 min read
This year, Netflix decided to make Spring Boot their core Java framework, while leveraging the community’s contributions via Spring Cloud Netflix. The team at Netflix started working towards fully operating in the cloud in 2007. Along the way, it built several cloud infrastructure libraries and systems, including Ribbon, an Inter Process Communication (IPC) library for load balancing; Eureka, an AWS service registry for service discovery; and Hystrix, a latency and fault tolerance library.

Spring Cloud Netflix provides Netflix OSS integrations for Spring Boot apps with the help of autoconfiguration and binding to the Spring Environment. It reached version 1.0 in 2015. The idea behind Spring Cloud was to expose the Netflix OSS components through Spring Boot instead of Netflix-internal solutions, and it has since become the preferred way for the community to adopt Netflix’s open source software. It features Eureka, Ribbon, and Hystrix.

Why did Netflix opt for the Spring Boot framework?

In the early 2010s, the requirements for Netflix's cloud infrastructure were efficiency, reliability, scalability, and security. Since there were no suitable alternatives at the time, the team at Netflix created solutions in-house. By adopting the Spring Boot framework, Netflix has managed to meet all of these requirements, as it provides polished experiences such as:

Data access with spring-data,
Complex security management with spring-security, and
Integration with cloud providers with spring-cloud-aws.

The Spring framework also features proven and long-lasting abstractions and APIs, and the Spring team has provided quality implementations of those abstractions and APIs. This abstract-and-implement methodology also matches well with Netflix’s principle of being “highly aligned, loosely coupled”.

“We plan to leverage the strong abstractions within Spring to further modularize and evolve the Netflix infrastructure. Where there is existing strong community direction, such as the upcoming Spring Cloud Load Balancer, we intend to leverage these to replace aging Netflix software.” - Netflix

Read more about this news on the Netflix Tech Blog.

Netflix’s culture is too transparent to be functional, reports the WSJ
Tech News Today: Facebook’s SUMO challenge; Netflix AVA; inmates code; Japan’s AI, blockchain uses
How Netflix migrated from a monolithic to a microservice architecture [Video]

Unity ML-Agents Toolkit v0.6 gets two updates: improved usability of Brains and workflow for Imitation Learning

Sugandha Lahoti
19 Dec 2018
2 min read
The Unity ML-Agents Toolkit v0.6 is getting two major enhancements, announced the Unity team in a blog post on Monday. The first update turns Brains from MonoBehaviours into ScriptableObjects, improving their usability. The second update allows developers to record expert demonstrations and use them for offline training, providing a better user workflow for Imitation Learning.

Brains are now ScriptableObjects

In previous versions of the ML-Agents Toolkit, Brains were GameObjects attached as children to the Academy GameObject, which made it difficult to re-use Brains across Unity scenes within the same project. In the v0.6 release, Brains are ScriptableObjects, making them manageable as standard Unity assets. This makes it easy to use them across scenes and to create Agent Prefabs with Brains pre-attached.

The Unity team has introduced the Learning Brain ScriptableObject, which replaces the previous Internal and External Brains, along with Player and Heuristic Brain ScriptableObjects to replace the Player and Heuristic Brain Types, respectively. Developers can no longer change the type of Brain with the Brain Type drop-down; instead, they need to create separate Brains for Player and Learning from the Assets menu. The BroadcastHub in the Academy component keeps track of which Brains are being trained.

Record expert demonstrations for offline training

The Demonstration Recorder allows users to record the actions and observations of an Agent while playing a game. These recordings can be used to train Agents at a later time via Imitation Learning or to analyze the data. In short, the Demonstration Recorder lets training data be reused across multiple training sessions, rather than captured anew every time. Users can add the Demonstration Recorder component to their Agent, check Record, and give the demonstration a name. To train an Agent with the recording, users can modify the Hyperparameters in the training configuration.

Check out the documentation on GitHub for more information. Read more about the new enhancements on the Unity Blog.

Getting started with ML agents in Unity [Tutorial]
Unity releases ML-Agents toolkit v0.5 with Gym interface, a new suite of learning environments
Unite Berlin 2018 Keynote: Unity partners with Google, launches Ml-Agents ToolKit 0.4, Project MARS and more

Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more

Amrata Joshi
19 Dec 2018
2 min read
Yesterday, the team at Oracle released VirtualBox 6.0.0, a free and open-source hosted hypervisor for x86 computers. VirtualBox was initially developed by Innotek GmbH, which was acquired by Sun Microsystems in 2008 and then by Oracle in 2010. VirtualBox is a virtualization product for enterprise as well as home use, and an extremely feature-rich, high-performance product for enterprise customers.

Features of VirtualBox 6.0.0

User interface

VirtualBox 6.0.0 comes with greatly improved HiDPI and scaling support, including better detection and per-machine configuration. The user interface is simpler and more powerful. It also comes with a new file manager that enables users to control the guest file system and copy files between host and guest.

Graphics

VirtualBox 6.0.0 features 3D graphics support for Windows guests, and VMSVGA 3D graphics device emulation on Linux and Solaris guests. It comes with added support for surround speaker setups, and a new vboximg-mount utility on Apple hosts for accessing the content of guest disks on the host. There is also added support for using Hyper-V as a fallback execution core on Windows hosts, avoiding the inability to run VMs at the price of reduced performance, as well as support for exporting a virtual machine to Oracle Cloud Infrastructure. This release also comes with better application and virtual machine set-up.

Linux guests

This release supports Linux 4.20 and VMSVGA. The process of building vboxvideo on the EL 7.6 standard kernel has been improved with this release.

Other features

Support for DHCP options.
Initial support for macOS guests.
It is now possible to configure up to four custom ACPI tables for a VM.
Video and audio recordings can now be enabled separately.
Better support for attaching and detaching remote desktop connections.

Major bug fixes

A bug in the previous release that caused the wrong instruction to be executed after a single-step exception with rdtsc has been fixed. This release also brings improved audio/video recording, fixes for serial port emulation, a fix for resizing disk images, improved shared folder auto-mounting, and BIOS fixes.

Read more about this news on VirtualBox’s changelog.

Installation of Oracle VM VirtualBox on Linux
Setting up a Joomla Web Server using Virtualbox, TurnkeyLinux and DynDNS
How to Install VirtualBox Guest Additions

V8 7.2 Beta releases with support for public class fields, well-formed JSON.stringify, and more

Bhagyashree R
19 Dec 2018
2 min read
Google’s JavaScript and WebAssembly engine, V8, hit version 7.2 yesterday and is currently in beta. The stable version of this release will be out in coordination with Chrome 72 Stable. V8 7.2 comes with support for embedded builtins, public class fields, well-formed JSON.stringify, and more. Following are some of the updates introduced in this version:

Support for embedded builtins

Embedded builtins are now supported and enabled by default on the ia32 architecture. The main aim of embedded builtins is to eliminate the per-Isolate builtin overhead by making builtins truly isolate-independent.

JavaScript parsing

Compared to V8 7.0, parsing speed has improved by roughly 30% in this version. When loading web pages, V8 previously spent 9.5% of its startup time parsing JavaScript; this has dropped to 7.5%, resulting in faster load times and more responsive pages.

WebAssembly improvements

Code generation is improved in the top execution tier. This version enables node splitting in the optimizing compiler’s scheduler and loop rotation in the backend. It also introduces custom wrappers that reduce overhead when calling imported JavaScript math functions, and comes with improved wrapper caching.

Async stack traces

A new feature called zero-cost async stack traces is introduced, which improves the error.stack property with asynchronous call frames. It addresses the problem that the error.stack property in V8 only provides a truncated stack trace up to the most recent await. The feature is currently available behind the --async-stack-traces command-line flag.

Public class fields

This version supports public class fields; support for private class fields is planned for a future V8 release.

Well-formed JSON.stringify

The well-formed JSON.stringify proposal is implemented in V8 7.2. Previously, JSON.stringify returned ill-formed Unicode strings if the input contained any lone surrogates. With this change, it outputs escape sequences for lone surrogates, making its output valid Unicode and representable in UTF-8.

You can read the full list of updates on V8’s official website.

Google’s V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon
Chrome V8 7.0 is in beta, to release with Chrome 70
V8 JavaScript Engine releases version 6.9!
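The JSON.stringify change is easy to demonstrate with a lone surrogate as input (a minimal sketch; it assumes a V8 7.2+ runtime, such as a recent Node.js release):

```javascript
// '\uD800' is a lone surrogate: half of a UTF-16 surrogate pair
// with no matching second half, so it is not valid Unicode text.
const lone = '\uD800';

// With well-formed JSON.stringify, the lone surrogate is emitted
// as an escape sequence rather than as a raw, ill-formed code unit,
// so the result is valid Unicode and representable in UTF-8.
const out = JSON.stringify(lone);
console.log(out); // "\ud800"  (the six-character escape, inside quotes)
```

On older engines the same call returned a string containing the raw unpaired surrogate, which could not be encoded as valid UTF-8.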

Anthony Levandowski announces Pronto AI and makes a coast-to-coast self-driving trip

Sugandha Lahoti
19 Dec 2018
2 min read
Anthony Levandowski is back in the self-driving space with a new company, Pronto AI. This Tuesday, he announced in a blog post on Medium that he has completed a trip across the country in a self-driving car without any human intervention. He is also developing a $5,000 aftermarket driver assistance system for semi-trucks, which will handle the steering, throttle, and brakes on the highway.

https://twitter.com/meharris/status/1075036576143466497

Previously, Levandowski was at the center of a controversy between Alphabet’s self-driving car company Waymo and Uber: he had allegedly taken confidential documents with him, over which the companies got into a legal battle. He was briefly barred from the autonomous driving industry during the trial; however, the companies settled the case early this year. After laying low for a while, he is back with Pronto AI and its first ADAS (advanced driver assistance system). “I know what some of you might be thinking: ‘He’s back?’” Levandowski wrote in his Medium post announcing Pronto’s launch. “Yes, I’m back.”

Levandowski told the Guardian that he traveled in a self-driving vehicle from San Francisco to New York without human intervention. Apart from periodic rest stops, he didn't touch the steering wheel or pedals for the full 3,099 miles. He posted a video that shows a portion of the drive, though it's hard to fact-check the full journey. The car was a modified Toyota Prius that used only video cameras, computers, and basic digital maps to make the cross-country trip.

In the Medium blog post, he also announced the development of a new camera-based ADAS. Named Copilot by Pronto, it delivers advanced features built specifically for Class 8 vehicles, with driver comfort and safety top of mind. It will offer lane keeping, cruise control, and collision avoidance for commercial semi-trucks, and will be rolled out in early 2019.

Alphabet’s Waymo to launch the world’s first commercial self-driving cars next month
Apex.AI announced Apex.OS and Apex.Autonomy for building failure-free autonomous vehicles
Uber manager warned the leadership team of the inadequacy of safety procedures in their prototype robo-taxis early March, reports The Information

Real-time motion planning for robots made faster and efficient with RapidPlan processor

Melisha Dsouza
19 Dec 2018
4 min read
Yesterday, Realtime Robotics announced in a guest post in the IEEE Spectrum that they have developed a new processor called ‘RapidPlan’, which tackles the bottleneck in a robot’s motion planning. Motion planning determines how to move a robot, or autonomous vehicle, from its current position to a desired goal configuration. Although the concept sounds simple, it is far from it: not only does the robot have to reach the goal state, it also has to avoid any obstacles along the way. According to a study, collision detection (determining which edges in the roadmap, i.e., motions, cannot be used because they would result in a collision) consumed 99 percent of a motion planner’s computing time.

Traditionally, motion planning has been implemented in software running on high-performance commodity hardware. The software implementation, however, introduces a delay of multiple seconds, making it impossible to deploy robots in dynamic environments or environments with humans. Such robots can only be used in controlled environments with just a few degrees of freedom. The post suggests that motion planning can be sped up with more hardware resources and software optimizations; however, even the vast computational resources of GPUs paired with sophisticated software consume a large amount of power and still cannot compute more than a few plans per second. Changes in a robot’s task or scenario also often require re-tuning the software.

How does RapidPlan work?

A robot moving from one configuration to another sweeps a volume in 3D space. Collision detection determines whether that swept volume collides with any obstacle (or with the robot itself). The surfaces of the swept volume and the obstacles are represented with meshes of polygons, and collision detection comprises computations to determine whether these polygons intersect. The challenge is that this is time-consuming: each test to determine whether two polygons intersect involves cross products, dot products, division, and other computations, and there can be millions of polygon intersection tests to perform.

RapidPlan overcomes this bottleneck and achieves general-purpose, real-time motion planning, producing sub-millisecond motion plans. The processor converts the computational geometry task into a much faster lookup task. At design time, it precomputes, for a large number of motions between configurations, data that records which parts of 3D space those motions collide with. This precomputation, performed offline by simulating the motions to determine their swept volumes, is loaded onto the processor to be accessed at runtime. At runtime, the processor receives sensory input that describes which parts of the robot’s environment are occupied by obstacles, and uses its precomputed data to eliminate the motions that would collide with them.

Realtime Robotics’ RapidPlan processor

This processor was developed as part of a research project at Duke University, where researchers found a way to speed up motion planning by three orders of magnitude using one-twentieth the power. Their processor checks for all potential collisions across the robot’s entire range of motion with unprecedented efficiency. RapidPlan is retargetable, updatable on the fly, and has the capacity for tens of millions of edges. Inheriting many of the design principles of the original Duke processor, it has a reconfigurable and more scalable hardware design for computing whether a roadmap edge’s motion collides with an obstacle. It has the capacity for extremely large roadmaps and can partition that capacity into several smaller roadmaps in order to switch between them at runtime with negligible delay. Additional roadmaps can also be transferred from off-processor memory on the fly, allowing the user to, for example, have different roadmaps corresponding to different states of the end effector or to different task types.

Robots with fast reaction times can operate safely in an environment with humans, and a robot that can plan quickly can be deployed in relatively unstructured factories and adjust to imprecise object locations and orientations. Industries such as logistics, manufacturing, health care, and agriculture, along with domestic assistants and autonomous vehicles, can benefit from this processor.

You can head over to IEEE Spectrum for more insights on this news.

MIPS open sourced under ‘MIPS Open Program’, makes the semiconductor space and SoC, ones to watch for in 2019
Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
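The precompute-then-lookup idea behind RapidPlan can be illustrated with a toy sketch. Everything here is hypothetical and for illustration only (the edge names, voxel ids, and the Map-based store are invented; the real processor does this in dedicated hardware, not software):

```javascript
// Offline precomputation (simulated here): for each roadmap edge,
// the set of voxel ids that its swept volume passes through.
const sweptVoxels = new Map([
  ['A->B', new Set([1, 2, 3])],
  ['A->C', new Set([4, 5])],
  ['B->C', new Set([3, 6])],
]);

// Runtime: sensors report which voxels are occupied by obstacles.
// Eliminating colliding edges is then a pure lookup over the
// precomputed sets, with no polygon-intersection geometry at all.
function collisionFreeEdges(occupiedVoxels) {
  const free = [];
  for (const [edge, voxels] of sweptVoxels) {
    let collides = false;
    for (const v of voxels) {
      if (occupiedVoxels.has(v)) { collides = true; break; }
    }
    if (!collides) free.push(edge);
  }
  return free;
}

// An obstacle in voxel 3 rules out every motion that sweeps through it.
console.log(collisionFreeEdges(new Set([3]))); // [ 'A->C' ]
```

The geometry cost is paid once, offline; at runtime only set-membership checks remain, which is what makes sub-millisecond planning plausible in hardware.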