Tech News

3711 Articles

A ransomware attack causes printing and delivery disruptions for several major US newspapers

Savia Lobo
31 Dec 2018
3 min read
A cyberattack on Tribune Publishing, one of the United States' biggest media groups, caused major printing and delivery disruptions for several major US newspapers over the weekend. The attack affected the printing centers operated by the publishing firm as well as its former property, the Los Angeles Times. The attack, which took place on Saturday, appeared to have originated from outside the United States, according to the Los Angeles Times. It led to distribution delays in the Saturday editions of the Times, the Tribune, the Sun and other newspapers that share a production platform in Los Angeles.

According to The New York Times, "a news article in The Los Angeles Times, and one outside computer expert said the attack shared characteristics with a form of ransomware called Ryuk, which was used to target a North Carolina water utility in October and other critical infrastructure."

According to The Los Angeles Times report, "The Times and the San Diego paper became aware of the problem near midnight on Thursday. Programmers worked to isolate the bug, which Tribune Publishing identified as a malware attack, but at every turn, the programmers ran into additional issues trying to access a myriad of files, including advertisements that needed to be added to the pages or paid obituaries."

"After identifying the server outage as a virus, technology teams made progress on Friday quarantining it and bringing back servers, but some of their security patches didn't hold and the virus began to reinfect the network, impacting a series of servers used for news production and manufacturing processes", the report added.

By late Friday, the attack was hindering the transmission of pages from offices across Southern California to printing presses as publication deadlines approached. Tribune Publishing said in a statement on Saturday, "the personal data of our subscribers, online users, and advertising clients have not been compromised. We apologize for any inconvenience and thank our readers and advertising partners for their patience as we investigate the situation."

It was unclear whether company officials had been in contact with law enforcement regarding the suspected attack. Katie Waldman, a spokeswoman for the Department of Homeland Security, said "we are aware of reports of a potential cyber incident affecting several news outlets, and are working with our government and industry partners to better understand the situation", the Los Angeles Times reported.

Pam Dixon, executive director of the World Privacy Forum, a nonprofit public interest research group, said, "usually when someone tries to disrupt a significant digital resource like a newspaper, you're looking at an experienced and sophisticated hacker". She added that the holidays are "a well known time for mischief" by digital troublemakers because organizations are more thinly staffed.

For the full story, read The Los Angeles Times' complete report.

Related reads:
• Hackers are our society's immune system – Keren Elazari on the future of Cybersecurity
• Anatomy of a Crypto Ransomware
• Sennheiser opens up about its major blunder that let hackers easily carry out man-in-the-middle attacks

CenturyLink suffers a major outage; affects 911 services across several states in the US

Natasha Mathur
31 Dec 2018
3 min read
CenturyLink, one of the largest American telecommunications providers, suffered a major outage that lasted almost two days, affecting internet, television, and 911 services across the US. The outage started at 17:18 UTC on Thursday and was resolved at 19:49 UTC on Saturday, according to CenturyLink's status page. CenturyLink's team worked on fixing the issue and kept customers updated about the outage on Twitter:

https://twitter.com/CenturyLink/status/1078350118938730496
https://twitter.com/CenturyLink/status/1078418494427938816
https://twitter.com/CenturyLink/status/1079095167930589184

As for the cause of the outage, CenturyLink may publish a detailed analysis later, though the company has not confirmed this yet. For now, Brian Krebs, an independent investigative journalist, has posted on Twitter a copy of a notice sent to CenturyLink's core customers, which gives some insight into what the cause could have been.

https://twitter.com/briankrebs/status/1079135599309791235

The notice blames a "card" at CenturyLink's data center in Colorado for "propagating invalid frame packets across devices". To restore services, the card had to be removed from the equipment along with secondary communication channel tunnels between specific devices, and a polling filter had to be applied to adjust the way packets were being received by the equipment.

The outage crippled CenturyLink's internet, phone, television, and home-security services, affecting customers across several US states. 911 services were also affected in several states and cities, including Seattle, Washington, Arizona, Minnesota, and Missouri. In this case, the outage affected only cellular calls to 911, not landline calls. Emergency alerts were sent to residents across several states warning them of the outage, and alternate numbers for reaching 911 were tweeted out by different police departments.

The US Federal Communications Commission (FCC) has launched a public investigation into the outage, with FCC chairman Ajit Pai calling it "completely unacceptable", and one whose "breadth and duration are particularly troubling".

https://twitter.com/AjitPaiFCC/status/1078678912035684353

"I have spoken with CenturyLink to underscore the urgency of restoring service immediately. We will continue to monitor this situation closely to ensure that customers' access to 911 is restored as quickly as possible," added Pai.

At 1:44 UTC on Saturday, the company noted on its status page that "all consumer services impacted by this event, including voice and 911, have been restored". It took more than two days for CenturyLink to give the all-clear: the company updated its status page at 19:49 UTC on Saturday, stating that "the network event experienced by CenturyLink Thursday has been resolved. Services for business and residential customers affected by the event have been restored. CenturyLink knows how important connectivity is to our customers, so we view any disruption as a serious matter and sincerely apologize for any inconvenience that resulted".

For more information, check out CenturyLink's official page.

Related reads:
• Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users
• GitHub down for a complete day due to failure in its data storage system
• Fortnite server suffered a minor outage, Epic Games was quick to address the issue

Haiku beta released with package management, a new preflet, WebKit and more

Amrata Joshi
28 Dec 2018
4 min read
On Tuesday, the team at Haiku released the Haiku beta, an open-source operating system that specifically targets personal computing. Inspired by BeOS, it is fast, simple to use, and easy to learn.

What's new in Haiku?

Package management: This release comes with a complete package management system. Haiku's packages are a special type of compressed filesystem image that is mounted upon installation (and thereafter on each boot) by packagefs, a kernel component. The /system/ hierarchy in the Haiku beta is now read-only, since it is merely a combination of the currently installed system-level packages; this ensures that the system files themselves are incorruptible. With this release, it is possible to boot into a previous package state or even blacklist individual files. Because the disk transactions for managing packages are limited, installations and uninstallations are practically instant. It is possible to manage the installed package set on a non-running Haiku system by mounting its boot disk and manipulating the /system/packages directory and associated configuration files. It is now also possible to switch your system repositories from master to r1beta1.

WebPositive upgrades: The system web browser is more stable than before, with YouTube now functioning properly, along with other under-the-hood changes. The work on WebKit led to fixes for a large number of bugs in Haiku itself, such as broken stack alignment, various kernel panics in the network stack, bad edge-case handling in app_server's rendering core, missing support for extended transforms and gradients, broken picture-clipping support, and missing POSIX functionality. Haiku WebKit now also uses Haiku's network protocol layer and supports Gopher.

Completely rewritten network preflet: The old network preflet has been replaced with a completely new preflet, designed from the ground up for ease of use and longevity. The preflet can now manage network services on the machine, such as OpenSSH and ftpd. It also uses a plugin-based API, so third-party network services (VPNs, web servers, etc.) can integrate with it.

User interface cleanup and live color updates: A lot of miscellaneous cleanups have been made to various parts of the user interface since the last release. Mail and Tracker have both received a significant internal cleanup of their UI code. This release also features Haiku-style toolbars and font-size awareness.

Major improvements in Haiku

Media subsystem improvements: The media subsystem features a large number of cleanups to the Media Kit to improve fault tolerance, latency correction, and performance. HTTP and RTSP streaming support is now integrated into the I/O layer of the Media Kit, so live streams can be played in WebPositive via HTML5 audio/video support, or in the native MediaPlayer.

FFmpeg decoder plugin improvements: FFmpeg 4.0 is now used even on GCC2 builds. This release adds support for more audio and video formats, as well as significant performance improvements.

HDA driver improvements: The driver for HDA (High Definition Audio) chipsets now supports the audio chipsets found in modern x86-based hardware.

RemoteDesktop: Haiku's native RemoteDesktop application has been improved and added to the builds. RemoteDesktop forwards drawing commands from the host system to the client system and does not require any special server; it can connect to and run applications on any Haiku system.

SerialConnect: This release comes with SerialConnect, a simple and straightforward graphical interface to serial ports. It supports arbitrary baud rates as well as certain extended features such as XMODEM file transfers.

Built-in Debugger is now the default: Haiku's built-in Debugger has replaced GDB as the default debugger. It also features a command-line interface for those who prefer it, and it services the system-wide crash dialogs.

launch_daemon: The launch_daemon now includes support for service dependency tracking, lazy daemon startup, and automatic restart of daemons upon crashes.

Updated filesystem drivers: The NFSv4 client, a GSoC project, is now included by default. Haiku's userlandfs, which supports running filesystem drivers in userland, now ships with Haiku itself; it supports running BeOS filesystem drivers, which are not supported in kernel mode.

To know more about this release, check out Haiku's release notes.

Related reads:
• The Haiku operating system has released R1/beta1
• Haiku, the open source BeOS clone, to release in beta after 17 years of development
• KDevelop 5.3 released with new analyzer plugin and improved language support

Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!

Amrata Joshi
28 Dec 2018
2 min read
Yesterday, the team at Apache NetBeans released Apache NetBeans IDE 10.0, an integrated development environment for Java. This release focuses on adding support for JDK 11, JUnit 5, PHP, JavaScript, and Groovy.

What's new in Apache NetBeans IDE 10.0?

JDK 11 support: NetBeans now integrates with the nb-javac project, which adds support for JDK 11. The CORBA modules have been removed. This release also supports JEP 309, Dynamic Class-File Constants, and JEP 323, Local-Variable Syntax for Lambda Parameters.

PHP support: For PHP 7.3, it is now possible to use trailing commas in function calls, and the Heredoc and Nowdoc syntaxes are supported. For PHP 7.2, the release adds support for trailing commas in list syntax and coloring for object types. For PHP 7.1, it adds class constant visibility, multi-catch exception handling, nullable types, support for keys in list(), and coloring for the new keywords void and iterable.

JUnit 5: JUnit 5.3.1 has been added as a new library to NetBeans, so users can easily add it to their Java projects. JUnit 5 is now the default JUnit version for Maven projects without any existing tests. This release also supports the JUnit 5 @Testable annotation and ships a default JUnit 5 test template.

OpenJDK: This release automatically detects JTReg from the OpenJDK configuration. Various improvements, such as limiting the directories that are scanned for modules, have been made for working with the OpenJDK project.

Some users have compared Apache NetBeans IDE 10.0 with Eclipse and IntelliJ, and most are of the opinion that this release works better than both. Read more about this release in detail on Apache NetBeans' official blog.

Related reads:
• Apache NetBeans 9.0 is now available with Java 9 & 10 support
• Apache NetBeans 9.0 RC1 released!
• The NetBeans Developer's Life Cycle

200+ Bitcoins stolen from Electrum wallet in an ongoing phishing attack

Melisha Dsouza
28 Dec 2018
3 min read
Popular Bitcoin wallet Electrum and Bitcoin Cash wallet Electron Cash are the subject of an ongoing phishing attack. The hacker, or hackers, have already got away with over 200 Bitcoin (around $718,000 at the time of writing), and with the attack still ongoing, it is quite possible they will get away with much more. The phishing attack urges wallet users to download and install a malicious software update from an unauthorized GitHub repository, according to ZDNet. The attack began last Friday, December 21, and the vulnerability at its heart remains unpatched.

The official Electrum blog on GitHub says the wallet's admins privately received a screenshot from a German chat room, in response to the issue, showing new malware being distributed that disguises itself as the "real" Electrum.

Source: GitHub

Immediately after investigating the reasons for the error message, they silently made mitigations in 5248613 and 5dc240d and released Electrum wallet version 3.3.2. The attacker then stopped the phishing attack, temporarily. Yesterday, one of the Electrum developers, SomberNight, announced on GitHub that the attacker has started the malicious activity again. Electrum wallet admins are taking steps to make the attack less usable for the attacker.

Execution of the ongoing phishing attack

To launch such a major attack, the attacker added tens of malicious servers to the Electrum wallet network. When users of legitimate Electrum wallets initiate a Bitcoin transaction and the transaction reaches one of the malicious servers, the server replies with an error message urging users to download a wallet app update from a malicious website (a GitHub repo). If the user clicks the given link, the malicious update is downloaded, after which the fake app asks the user for a two-factor authentication (2FA) code. However, 2FA codes are normally only requested before sending funds, not at wallet startup. The malware stealthily obtains users' 2FA codes to steal their funds and transfer them to the attacker's Bitcoin addresses. The underlying weakness here is that Electrum servers are allowed to trigger popups with custom text inside users' wallets.

Steps taken by Electrum admins to create user awareness

The developers at Electrum have updated the wallet so that whenever an attacker sends a malicious message, it no longer appears as an organized, rich-text message. Instead, the user receives a non-formatted error that looks more like unreadable code. This alerts the user that the message is malicious and not legitimate. The following screenshot shows how the ongoing attack looks in the new Electrum wallet version:

Source: GitHub

Blockchain reporter says that "The Electrum Development team has identified some 33 malicious Electrum servers, though the total number is suspected to be between 40 and 50." You can head over to Reddit for more insights on this news.

Related reads:
• Malicious code in npm 'event-stream' package targets a bitcoin wallet and causes 8 million downloads in two months
• There and back again: Decrypting Bitcoin's 2017 journey from $1000 to $20000
• Bitcoin Core escapes a collapse from a Denial-of-Service vulnerability
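To make the mitigation concrete, here is a minimal, hypothetical sketch (not Electrum's actual code) of the idea described above: treat any server-supplied error message as untrusted data and display it as escaped plain text, so an injected rich-text link is shown literally instead of being rendered as something clickable.

```python
# Hypothetical sketch only: the function name, the sample message, and the
# fake repository URL are illustrative and do not come from Electrum's code.
import html

def render_untrusted_server_message(raw_message: str) -> str:
    """Return a display-safe, plain-text rendering of a server error message."""
    # Escaping the markup means an injected <a href="..."> phishing link is
    # shown literally as text instead of being rendered as a clickable link.
    return html.escape(raw_message)

if __name__ == "__main__":
    phishing_reply = ('Your version is outdated. '
                      '<a href="https://github.com/fake-repo/electrum">Update now</a> '
                      'to broadcast this transaction.')
    # The output shows the raw markup, which looks suspicious rather than clickable.
    print(render_untrusted_server_message(phishing_reply))
```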

Introducing Feelreal Sensory Mask, a VR mask that adds a sense of smell while viewing VR content

Prasad Ramesh
28 Dec 2018
2 min read
Feelreal launched its Feelreal Sensory Mask earlier this week, a VR accessory that not only lets you see things in virtual reality (VR) but also gives you a sense of smell, among other sensations. VR headsets have seen major progress in recent years, from higher resolutions to wider fields of view, but being able to smell different odors and receive other sensory inputs through a VR headset is something entirely new. Feelreal Inc brings a sensory mask that adds a sense of smell when you are viewing VR content.

Feelreal puts it this way: "Imagine the depth of interaction when users can truly feel themselves on a racing track and actually smell burned rubber. Or being able to grasp the feeling of being on a battlefield complete with the intense gunpowder odor. This is what the multi-sensory virtual reality experience is all about."

The company tweeted on Wednesday announcing the new VR mask:

https://twitter.com/feelreal_com/status/1077963480324587521

The smells are produced by a scent generator that holds a replaceable cartridge containing nine aroma capsules, with 255 scents to choose from in the company's store. Along with various odors, the VR mask delivers other sensory inputs to the user:

• Water mist: an ultrasonic ionizing system makes you feel the rain on your cheeks.
• Heat: safe micro-heaters let you sense the warmth of the desert.
• Wind: two micro-coolers let you experience the cool mountain breeze.
• Vibration: force-feedback haptic motors induce impactful vibrations.

The Feelreal multi-sensory mask has many applications: 360° movies, Feelreal Dreams, VR games, immersive meditation, and aromatherapy controlled by the company's mobile app. You can connect the mask to a Samsung Gear VR, Oculus Rift, Oculus Go, HTC Vive, or PlayStation VR via Bluetooth or WiFi.

Feelreal is planning to bring the mask to Kickstarter for funding; in 2015, the company attempted to crowdfund a version of the mask with seven cartridges but could not raise the necessary funds. The mask comes in three colors (white, gray, and black) and is currently in the crowdfunding stage. For more details, visit the Feelreal website.

Related reads:
• Why mobile VR sucks
• Building custom views in vRealize operation manager [Tutorial]
• Google's Daydream VR SDK finally adds support for two controllers

Researchers introduce a deep learning method that converts mono audio recordings into 3D sounds using video scenes

Natasha Mathur
28 Dec 2018
4 min read
A pair of researchers, Ruohan Gao of the University of Texas and Kristen Grauman of Facebook AI Research, published a method earlier this month that teaches an AI system to convert ordinary mono sound into binaural sound. They call the concept "2.5D visual sound", and it uses video to generate synthetic 3D sound.

Background

According to the researchers, binaural audio gives a listener the 3D sound sensation that allows a rich experience of a scene. However, such recordings are not easily available and require expertise and equipment to obtain. The researchers note that humans generally determine the direction of a sound with the help of visual cues, so they use a similar technique: a machine learning system is given a video of a scene together with a mono sound recording. From the video, the ML system figures out the direction of the sounds and distorts the "interaural time and level differences" to create the effect of 3D sound for the listener.

The researchers devised a deep convolutional neural network capable of learning how to decode a monaural (single-channel) soundtrack into its binaural counterpart. Visual information about objects and the scene is injected into the CNN throughout the process. "We call the resulting output 2.5D visual sound—the visual stream helps "lift" the flat single channel audio into spatialized sound. In addition to sound generation, we show the self-supervised representation learned by our network benefits audio-visual source separation", say the researchers.

Training method used

For the training process, the researchers first created a database of examples of the effect they want the machine learning system to learn. Grauman and Gao built the database from binaural recordings of over 2,265 musical clips, which they also converted into videos. The researchers mention in the paper, "Our intent was to capture a variety of sound-making objects in a variety of spatial contexts, by assembling different combinations of instruments and people in the room. We post-process the raw data into 10s clips. In the end, our BINAURAL-MUSIC-ROOM dataset consists of 2,265 short clips of musical performances, totaling 6.3 hours".

The equipment used for the project included a 3Dio Free Space XLR binaural microphone, a GoPro HERO6 Black camera, and a Tascam DR-60D recorder as an audio pre-amplifier. The GoPro camera was mounted on top of the 3Dio binaural microphone to mimic a person seeing and hearing, respectively, and it records video at 30 fps with stereo audio. The researchers then used recordings from this dataset to train a machine-learning algorithm to recognize the direction of sound from a video of the scene. Once trained, the system can watch a video and distort a monaural recording to simulate where the sound ought to be coming from.

Results

A demonstration video, "2.5D Visual Sound", compares the 2.5D recordings against the original monaural recordings, and the results are quite good. However, the method does not generate fully 3D sound, and there are certain situations the algorithm finds difficult to deal with. The ML system also cannot account for any sound source that is not visible in the video, or for sources it has not been trained on.

The researchers say the method works best for music videos, and they plan to extend its applications. "Generating binaural audio for off-the-shelf video could potentially close the gap between transporting audio and visual experiences, and will be useful for new applications in VR/AR. As future work, we plan to explore ways to incorporate object localization and motion, and explicitly model scene sounds", say the researchers. For more information, check out the official research paper.

Related reads:
• Italian researchers conduct an experiment to prove that quantum communication is possible on a global scale
• Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
• Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference
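The paper's network learns this mapping end to end from video, but the underlying audio idea can be illustrated with a hand-crafted sketch: given a mono signal and an estimated source direction, apply an interaural time difference (ITD) and interaural level difference (ILD) to approximate a two-channel signal. The constants and the fixed direction below are illustrative assumptions, not values from the paper.

```python
# Simplified, hand-crafted illustration of ITD/ILD spatialization; this is not
# the paper's learned model, and all constants here are rough assumptions.
import numpy as np

SAMPLE_RATE = 44_100          # Hz
HEAD_RADIUS = 0.0875          # metres, rough average head radius
SPEED_OF_SOUND = 343.0        # m/s

def mono_to_pseudo_binaural(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Return a (2, n) array approximating left/right channels for a source
    at `azimuth_deg` (0 = straight ahead, +90 = fully to the right)."""
    az = np.deg2rad(azimuth_deg)

    # Woodworth-style ITD approximation, converted to whole samples.
    itd_seconds = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    itd_samples = int(round(itd_seconds * SAMPLE_RATE))

    # Simple ILD: attenuate the ear facing away from the source.
    ild_gain = 1.0 - 0.3 * abs(np.sin(az))

    near = mono
    far = np.concatenate([np.zeros(itd_samples), mono])[: len(mono)] * ild_gain

    # Positive azimuth = source on the right, so the right ear is "near".
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right])

if __name__ == "__main__":
    t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)
    mono_tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # 1 s, 440 Hz test tone
    stereo = mono_to_pseudo_binaural(mono_tone, azimuth_deg=45)
    print(stereo.shape)   # (2, 44100)
```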

Researchers introduce a CNN-based system for identifying radioresistant cancer-causing cells

Bhagyashree R
28 Dec 2018
2 min read
Earlier this year, a group of researchers from Osaka University introduced an AI system based on a convolutional neural network (CNN) for automatically identifying radioresistant tumor cells. Within a single cancer tumor there can be tremendous variation in the types of cancer cells present, which can make it difficult for doctors to make accurate assessments of cell types. The process can also be time-consuming and hampered by human error. This AI-based system can make it easier for doctors to choose the most effective treatment, and it also has applications in preclinical cancer research.

Explaining the utility of the system, one of the researchers, Masayasu Toratani, said, "The automation and high accuracy with which this system can identify cells should be very useful for determining exactly which cells are present in a tumor or circulating in the body of cancer patients. For example, knowing whether or not radioresistant cells are present is vital when deciding whether radiotherapy would be effective, and the same approach can then be applied after treatment to see whether it has had the desired effect."

For the study, the researchers used phase-contrast images of radioresistant clones for two cell lines: mouse squamous cell carcinoma NR-S1 and human cervical carcinoma ME-180. They gathered 10,000 images of each of the parental NR-S1 and ME-180 controls as well as the radioresistant clones. VGG16, a convolutional neural network for object recognition, was then trained on 8,000 images of cells; the researchers used another 2,000 images to test its accuracy. The model achieved an accuracy of 96%: as per the results, it had learned the features that distinguish mouse cancer cells from human ones, and radioresistant cancer cells from radiosensitive ones. The features extracted by the trained CNN were then plotted using t-distributed stochastic neighbor embedding (t-SNE), and the plot showed that the images of each cell line were well clustered.

In the future, the researchers plan to train the system on additional cell types to make it a universal system that can automatically identify and distinguish all variants of cancer cells. To know more, check out the study published in Cancer Research.

Related reads:
• REVOLVER: A machine learning approach to forecast cancer growth
• Google, Harvard researchers build a deep learning model to forecast earthquake aftershocks location with over 80% accuracy
• Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
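For readers who want a feel for this kind of pipeline, here is a rough sketch using TensorFlow/Keras and scikit-learn: a VGG16 backbone with a small classification head, followed by feature extraction and a t-SNE projection. The study's actual preprocessing, hyperparameters, and data are not reproduced here, so the values below (and the random placeholder arrays standing in for the phase-contrast images) are assumptions for illustration only.

```python
# Illustrative sketch, not the Osaka team's code: placeholder data and
# hyperparameters stand in for the real 8,000/2,000 image split.
import numpy as np
import tensorflow as tf
from sklearn.manifold import TSNE

NUM_CLASSES = 4  # e.g. NR-S1, NR-S1 radioresistant, ME-180, ME-180 radioresistant

# VGG16 backbone with a small classification head on top.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3), pooling="avg")
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random arrays standing in for the phase-contrast training and test images.
x_train = np.random.rand(32, 224, 224, 3).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=32)
x_test = np.random.rand(8, 224, 224, 3).astype("float32")

model.fit(x_train, y_train, epochs=1, batch_size=8, verbose=0)

# Extract backbone features and project them with t-SNE, mirroring the
# clustering visualization mentioned in the study.
feature_extractor = tf.keras.Model(base.input, base.output)
features = feature_extractor.predict(x_test, verbose=0)
embedding = TSNE(n_components=2, perplexity=3).fit_transform(features)
print(embedding.shape)  # (8, 2)
```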

US government privately advised by top Amazon executive on web portal worth billions to Amazon, The Guardian reports

Melisha Dsouza
27 Dec 2018
5 min read
A top Amazon executive, Anne Rung, advised the Trump administration on the launch of a new internet portal that is expected to generate billions of dollars for Amazon, The Guardian reports. Emails seen by the Guardian indicate ways in which the portal would give the technology giant a dominant role in how the US government buys everything from paper clips to office chairs.

Emails exchanged between Rung and the GSA

Rung communicated with Mary Davie, a top official at the General Services Administration (GSA), about the approach the government would take to create the new portal, known as the "Amazon amendment". This communication took place in 2017, before the legislation that created the portal was signed into law later that year. According to The Intercept, the amendment, Section 801 of the National Defense Authorization Act (NDAA), would allow for the creation of an online portal that government employees could use to purchase items including office supplies or furniture. Experts have commented on how the deal would help Amazon establish a tight grip on the $53 billion government acquisitions market.

The emails offer new insights into how Amazon has used key former government officials it now employs (directly and as consultants) to gain influence and potentially shape lucrative government contracts. One of the emails shows the setup of a meeting between Rung and Davie. Rung wrote: "IF the legislation is enacted, I have a sense of how GSA will want to approach this (first you have to select providers, then you will want to implement something incrementally/phased approach), but I want to make sure that I'm not way off the mark. It will help me design a discussion/agenda for our meeting next month." When Rung asked Davie whether they should wait until after the legislation passed to discuss it, Davie responded that the administration was planning to move ahead regardless of the outcome of the bill on Capitol Hill.

The Guardian also reports that it has not yet been determined which companies will build the US government's new e-commerce portal, but Amazon is expected to take a dominant role, giving it a strong footing in the market for federal procurement of commercial products.

This is not the first time Amazon has involved itself with the US government. Jeff Bezos, the owner of Amazon and the Washington Post, has had a troubled past with President Donald Trump; however, that hostility seems reserved for Trump personally and not the US government. Amazon already operates a cloud service for the US intelligence community, including a contract with the CIA, and has said it can protect even the most top secret data on a cloud walled off from the public internet.

Stacy Mitchell, co-director of the Institute for Local Self-Reliance, a group that supports local businesses, said in a statement that "Amazon wants to be the interface between all government buyers and all the companies that want to sell to government, and that is an incredibly powerful and lucrative place to be."

Statements from the GSA and Amazon

The Guardian says Amazon declined to comment on questions about how much of its business is currently connected to the federal government. However, the tech giant did admit that it had been engaging in "continuous conversations with the GSA" and commended the agency for "transforming the conversation around online portals". Amazon said Rung had been compliant with all White House ethics rules.

The GSA said in a statement to the Guardian that during 2017 and 2018 it had met with 35 companies to discuss "existing commercial capabilities and conduct market research" regarding the e-commerce platforms. The statement says: "No company has been given special access. Instead, all companies expressing interest in the Commercial Platforms program have equal access to GSA. We cannot speculate on which companies will be part of the proof of concept until proposals are received, evaluated, and awards are made."

Trouble for Rung?

Federal law mandates a "cooling off" period of one year before a former senior government official works on projects he or she worked on while in government. It is not clear whether Rung's communication would be considered a violation of this ethics law, because there are no details on exactly what she worked on before leaving government.

Lisa Gilbert, the vice-president of legislative affairs at Public Citizen, a consumer advocacy group, said that while she did not believe the engagement between Rung and Davie was a violation of the law, it was "unsavory" to think that former government officials used their inside knowledge of how the "ballgame" works for corporate advantage. Gilbert said, "There is nothing inherently wrong in talking to stakeholders who will be impacted by the legislation. Our overwhelming worry is that corporate stakeholders have special access that other stakeholders--like public interest groups--do not get."

Mitchell, the small business advocate, said the Rung emails "display an inside relationship that other competing companies don't have" and show how government infrastructure was being designed with input from Amazon, giving it a big advantage. For more insights, visit The Guardian's complete coverage.

Related reads:
• NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
• Amazon Rekognition faces more scrutiny from Democrats and German antitrust probe
• Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus

According to a report, Microsoft plans to bring new 4K webcams featuring facial recognition to all Windows 10 devices in 2019

Amrata Joshi
27 Dec 2018
3 min read
Microsoft plans to introduce two new webcams next year. One is designed to extend Windows Hello facial recognition to all Windows 10 PCs. The other will work with the Xbox One, bringing back the Kinect feature that let users automatically sign in by moving in front of the camera. These webcams will work with multiple accounts and family members. Microsoft is also planning to launch its Surface Hub 2S in 2019, an interactive digital smart board for the modern workplace that features a USB-C port and upgradeable processor cartridges.

Until now, PC users have relied on alternatives from Creative, Logitech, and Razer to bring facial recognition to desktop PCs. The planned webcams are expected to be linked to the USB-C webcams that will ship with the Surface Hub 2, which launches next year, while the Surface Hub 2X is expected in 2020. In an interview with The Verge in October, Microsoft Surface chief Panos Panay suggested that Microsoft could release a USB-C webcam soon. "Look at the camera on Surface Hub 2, note it's a USB-C-based camera, and the idea that we can bring a high fidelity camera to an experience, you can probably guess that's going to happen," hinted Panay. Such a camera could be used to extend the experience beyond Microsoft's own Surface devices.

The camera for Windows 10 will, for the first time, bring facial recognition to all Windows 10 PCs. Currently, Windows Hello facial recognition is restricted to built-in webcams, like the ones on Microsoft's Surface devices. According to Windows watcher Paul Thurrott, Microsoft is making the new 4K cameras for both Windows 10 PCs and its gaming console, the Xbox One. The webcam will return a Kinect-like feature to the Xbox One, allowing users to authenticate by putting their face in front of the camera.

With the recent Windows 10 update, Microsoft enabled WebAuthn-based authentication, which helps users sign in to its sites, such as Office 365, with Windows Hello and security keys. The Windows Hello-compatible webcams and FIDO2, a password-less sign-in with Windows Hello at its core, will be launched together next year. It will be interesting to see how the new year turns out for Microsoft and its users with these major releases.

Related reads:
• Microsoft urgently releases Out-of-Band patch for an active Internet Explorer remote code execution zero-day vulnerability
• NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
• Microsoft open sources Trill, a streaming engine that employs algorithms to process "a trillion events per day"

Pixel 3’s Top Shot camera feature uses computer vision to enable users to click the perfect picture

Bhagyashree R
27 Dec 2018
3 min read
At the 'Made by Google' event, Google launched the Pixel 3 and introduced its new camera features, one of which was Top Shot. Last week, the company shared further details on how Top Shot works. Top Shot saves and analyzes the images taken before and after the shutter press on the device in real time using computer vision techniques, and then recommends several alternative high-quality HDR+ photos.

How Top Shot works

Once the shutter button is pressed, Top Shot captures up to 90 images from the 1.5 seconds before and after the shutter press, and simultaneously selects up to two alternative shots to save in high resolution. These alternative shots are then processed by Visual Core as HDR+ images with a very small amount of extra latency and are embedded into the file of the Motion Photo.

Source: Google AI Blog

Top Shot analyzes the captured images based on three kinds of attributes: functional qualities like lighting, objective attributes like whether the people in the image are smiling, and subjective qualities like emotional expressions. It does this using a computer vision model, an optimized version of the MobileNet model, which operates on-device in a low-latency mode. In early layers, the model detects low-level visual attributes, such as whether the subject is blurry. In subsequent layers, it detects more complex objective attributes, such as whether the subject's eyes are open, and subjective attributes, such as whether there is an emotional expression of amusement or surprise.

The model was trained using a technique named knowledge distillation, which compresses the knowledge in an ensemble of models into a single model, over a large number of diverse face images, using quantization during both training and inference. To predict quality scores for faces, Top Shot uses a layered Generalized Additive Model (GAM) and combines the scores into a weighted-average "frame faces" score. For use cases where faces are not the primary subject, three more scores are added to the overall frame quality score: a subject motion saliency score, a global motion blur score, and 3A scores (auto exposure, autofocus, and auto white balance). All of these scores were used to train a model that predicts an overall quality score, matching the frame preference of human raters, to maximize end-to-end product quality.

To read more, check out the post on the Google AI Blog.

Related reads:
• Google launches new products, the Pixel 3 and Pixel 3 XL, Pixel Slate, and Google Home Hub
• Google's Pixel camera app introduces Night Sight to help click clear pictures with HDR+
• Google open sources DeepLab-v3+: A model for Semantic Image Segmentation using TensorFlow
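As a toy illustration of the scoring step described above, the sketch below combines per-frame attribute scores into a single quality score and keeps the best alternative shots. The weights, score values, and class layout are made up for illustration; Google has not published the exact combination Top Shot uses.

```python
# Toy illustration only: weights and scores are invented, not Google's.
from dataclasses import dataclass

@dataclass
class FrameScores:
    frame_id: int
    frame_faces: float      # weighted-average face quality (0..1)
    motion_saliency: float  # subject motion saliency (0..1)
    motion_blur: float      # global blur score, higher = sharper (0..1)
    three_a: float          # auto exposure / focus / white balance (0..1)

WEIGHTS = {"frame_faces": 0.5, "motion_saliency": 0.2,
           "motion_blur": 0.2, "three_a": 0.1}

def overall_quality(f: FrameScores) -> float:
    """Combine the per-attribute scores into one overall frame score."""
    return (WEIGHTS["frame_faces"] * f.frame_faces
            + WEIGHTS["motion_saliency"] * f.motion_saliency
            + WEIGHTS["motion_blur"] * f.motion_blur
            + WEIGHTS["three_a"] * f.three_a)

def pick_alternative_shots(frames: list, k: int = 2) -> list:
    """Return the ids of the k highest-scoring frames around the shutter press."""
    ranked = sorted(frames, key=overall_quality, reverse=True)
    return [f.frame_id for f in ranked[:k]]

if __name__ == "__main__":
    burst = [
        FrameScores(0, 0.70, 0.4, 0.9, 0.8),
        FrameScores(1, 0.95, 0.5, 0.8, 0.9),   # eyes open, smiling
        FrameScores(2, 0.30, 0.6, 0.3, 0.7),   # blurry, eyes closed
    ]
    print(pick_alternative_shots(burst))       # [1, 0]
```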

ProPublica shares learnings of its Facebook Political Ad Collector project

Natasha Mathur
27 Dec 2018
5 min read
ProPublica, a non-profit newsroom known for its investigative journalism, published an article yesterday, written by Jeremy B Merrill, its news app developer, about its investigative project around Facebook. ProPublica collected over 100,000 targeted Facebook ads to understand and report on how political messaging works on Facebook, and to analyze how the system shapes public discourse.

Merrill states that the Facebook Political Ad Collector project launched in fall 2017 and was joined by over 16,000 people. As part of the project, participants installed a browser plug-in that anonymously sent in the ads they saw while browsing Facebook. As the data was collected, the team observed that the number of ads collected from Democrats and progressive groups was larger than from Republicans or conservative groups. "We tried a number of things to make our ad collection more diverse: to start, we bought our own Facebook ads asking people across a range of states to install the ad collector. We also teamed up with Mozilla, maker of the Firefox web browser, for a special election-oriented project, in an attempt to reach a broader swath of users", writes Merrill.

However, since the Political Ad Collector was entirely anonymous, not much information could be gathered about the audience. Another issue was that left-leaning groups made more use of Facebook advertising than conservative groups. To solve this, ProPublica partnered with a research firm called YouGov to create a panel of users spanning a wide spectrum of demographic groups and political ideologies who agreed to use a new, less anonymous ad collector plug-in. Each of these users was assigned a unique ID tied back to data about them, such as demographics, political preference, race, and state of residence, provided by YouGov. The partnership was funded by the Democracy Fund. YouGov was able to link the users' answers to demographic questions, such as age and partisanship, to the ads they received. The process of collecting data from users of the original, publicly available ad collector plug-in who did not participate in the YouGov survey stayed the same; those users remained anonymous to ProPublica. The ads seen by participants in the YouGov survey, with their demographic data stripped, became part of ProPublica's existing ads database.

Learnings from the project

After receiving a more diverse sample of data on Facebook political ads, ProPublica reached the following conclusions:

• More than 70% of the political ads were largely targeted by ideology. Most of these ads were shown to "at least twice as many people from one side of the political spectrum than the other", and only about 18% of them were seen by a roughly even ratio of liberals and conservatives.
• One of the major advertisers targeting both sides of the political spectrum was AARP (the American Association of Retired Persons), which spent around $700,000 on ads from May until the election. Most of these ads encouraged users to vote; others urged people to hold their member of Congress accountable for voting yes on "last year's bad health care bills." The AARP has opposed efforts to replace the Affordable Care Act.
• One of the ads seen by a majority of the people in the YouGov sample came from Tom Steyer's "Need to Impeach" organization. The ad included a video saying, "We need to impeach Donald Trump before he does more damage," citing migrant children and Hurricane Maria deaths. The ad was seen mostly by the self-identified liberals in YouGov's sample.
• ProPublica discovered a mysterious Facebook page called "America Progress Now" that urged liberals to vote for Green Party candidates. "The candidates themselves had never heard of the group, and we couldn't find any address or legal registration for it", writes Merrill.
• A number of other ads from liberal groups used misleading tactics similar to the ones used by groups such as Russia's Internet Research Agency to interfere with the 2016 US presidential elections. One such effort, the "Voter Awareness Project", asked conservatives not to vote to re-elect Ted Cruz, the Texas Republican senator, citing Trump's previous antagonism towards him; the group behind it was, however, liberal. Other liberals, such as Ohio gubernatorial candidate Rich Cordray and the Environmental Defense Action Fund, ran political ads from pages with news-organization-style names such as "Ohio Newswire" and "Breaking News Texas."

Merrill states that although the team found a way to determine how an ad is targeted, there are other complexities in Facebook's systems that the project cannot detect or understand. ProPublica is still looking for answers to questions such as the impact of the algorithm Facebook uses to show ads to the people most likely to click, the effect of some people seeing more expensive ads than others, how cheap ads differ from more expensive ones, and so on. ProPublica is continuing to work on the Ad Collector project and will make future announcements about further studies. For more information, read the official ProPublica post.

Related reads:
• Facebook introduces a fully convolutional speech recognition approach and open sources wav2letter++ and flashlight
• Facebook halted its project 'Common Ground' after Joel Kaplan, VP, public policy, raised concerns over potential bias allegations
• NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
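The "at least twice as many people from one side" finding above boils down to a simple per-ad skew calculation. Below is a small sketch of that calculation over a made-up data layout (an ad id plus the self-reported ideology of each panelist who saw it); the data schema and the function are assumptions for illustration, not ProPublica's actual pipeline.

```python
# Illustrative sketch only: the data layout is invented, not ProPublica's schema.
from collections import Counter

def ad_skew(impressions: list) -> dict:
    """Classify each ad as liberal-skewed, conservative-skewed, or roughly even,
    using the 2:1 threshold mentioned in the article.
    `impressions` is a list of (ad_id, ideology) tuples."""
    by_ad = {}
    for ad_id, ideology in impressions:
        by_ad.setdefault(ad_id, Counter())[ideology] += 1

    labels = {}
    for ad_id, counts in by_ad.items():
        lib, con = counts["liberal"], counts["conservative"]
        if lib >= 2 * max(con, 1):
            labels[ad_id] = "liberal-skewed"
        elif con >= 2 * max(lib, 1):
            labels[ad_id] = "conservative-skewed"
        else:
            labels[ad_id] = "roughly even"
    return labels

if __name__ == "__main__":
    sample = ([("ad1", "liberal")] * 8 + [("ad1", "conservative")] * 3
              + [("ad2", "liberal")] * 5 + [("ad2", "conservative")] * 5)
    print(ad_skew(sample))   # {'ad1': 'liberal-skewed', 'ad2': 'roughly even'}
```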

Fortnite server suffered a minor outage, Epic Games was quick to address the issue

Prasad Ramesh
27 Dec 2018
2 min read
Yesterday, many Fortnite players reported long queues for matches and timeouts while trying to play the game. The Fortnite outage happened during the holiday season.

https://twitter.com/Soldier_Dimitri/status/1078029461461913614

Epic Games acknowledged the issue and tweeted that an investigation was underway to find the cause of the timeouts and slowdowns some users hit while trying to log in and play. Epic told players to check the status on its website.

https://twitter.com/FortniteGame/status/1078027774034657282

TechCrunch noted the Fortnite outage and was able to reproduce it: the game kept them waiting for about five minutes and then timed out. Epic Games reported a "minor service outage" that affected game services. Within three hours of acknowledging the problem, Epic Games issued a fix for the issue and let users know on Twitter:

https://twitter.com/FortniteGame/status/1078065448585965568

A member of the Epic Games team explained the reason for the Fortnite outage on Reddit: "Quick summary is that deploying a fix for elf challenge reward not being granted exposed a latent bug in our profile migration code (the code that fixes up players). This caused players to be kicked etc and triggered our waiting room. We fixed the issue and deployed a new backend, however, didn't see a recovery in login success. This ended up due to "sticky session" configuration having been lost on our waiting room load balances when moving them to ALBs. This meant that there was a 90% chance of having to requeue after hitting the front of the line. D'oh. This should all be fixed now and we are seeing a recovery in numbers/login throughout / waiting room / etc."

Gamers have appreciated the company's transparency in letting users know what happened: "As a company, I applaud you for your transparency. It is highly unusual and I hope you continue to set precedence in this industry."

Related reads:
• Fortnite creator Epic games launch Epic games store where developers get 88% of revenue earned; challenging Valve's dominance
• Epic games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android Installer before patch was ready
• Google is missing out $50 million because of Fortnite's decision to bypass Play Store

DragonFly BSD 5.4.1 released with new system compiler in GCC 8 and more

Amrata Joshi
26 Dec 2018
4 min read
On Christmas Eve, the DragonFly team released DragonFly BSD 5.4.1, a free and open-source Unix-like operating system. This version comes with a new system compiler in GCC 8, improved NUMA support, and a large number of network and virtual machine driver updates. The release also brings significant HAMMER2 improvements and better WLAN interface handling.

https://twitter.com/dragonflybsd/status/1077205440650534912

What's new in DragonFly BSD 5.4.1

Big-ticket items: This release comes with much better support for asymmetric NUMA (Non-Uniform Memory Access) configurations; both the memory subsystem and the scheduler now understand the Threadripper 2990WX's architecture. The team has been working on improving fairness for shared-vs-exclusive lock clashes and reducing cache ping-ponging due to non-contending SMP locks. The release also includes major updates to dports, and concurrency across multiple ttys and ptys has been improved.

GCC 8: DragonFly 5.4.1 ships GCC 8.0 and runs it as the default compiler; it is also used for building dports.

HAMMER2: HAMMER2 is now the default root filesystem in non-clustered mode. The bulkfree cache has been increased to reduce the number of iterations required, numerous bugs have been fixed, support on low-memory machines has been improved, and significant pre-work has been done on the XOP API to help support future networked operations.

Major changes

Security issues: The machdep.spectre_support sysctl can now be used to probe Spectre support, and the machdep.spectre_mitigation sysctl to enable or disable the mitigation. The default /root permissions have been changed from 755 to 700 in the build template. Delayed FP state has been removed, and the FP state is cleaned on switch, to avoid a known side-channel attack. User registers are zeroed on entry into the kernel (syscall, interrupt, or exception) to avoid speculative side-channel attacks.

Kernel: The drm code has been updated to match the Linux 4.7.10 kernel in a number of places, and the radeon driver has been updated to match Linux 3.18. CVE-2018-8897 has been mitigated. The release adds timer support for x2apic and a private_data field to struct file to improve application support. SPINLOCK and acpi_timer performance have been improved, a dirty vnode management facility has been added, and bottlenecks in the rlimit handling code have been removed. The size of the vm_object hash table has been increased by 4x to reduce collisions, concurrent tmpfs and allocvnode() performance has been improved, namecache performance has been improved, and the syscall path has been optimized.

Driver updates: Serial-output-only installs are now possible. This version comes with a virtio_balloon memory driver, and /dev/sndstat can now be opened multiple times by the same device. MosChip PCIe serial communications are now supported, missing descriptions for usb4bsd C610/X99 controllers have been added, and support for PCIe serial com and console has been added. The old PCI and ISA serial drivers have been removed.

Userland: This release adds rc support for ipfw3, updates vis(3) and unvis(3), updates the pciconf database, adds tcsetsid() to libc, and improves buildworld concurrency.

Networking: The network tunnel driver, tun(4), has been cleaned up and updated; it is now clonable for anyone building VPN links. The ARP issue in the bridge code has been fixed, interface groups are now supported in the kernel and pf(4), and the ENA (Elastic Network Adapter) network driver has been added.

Package updates: There are now a number of options for running a web browser on DragonFly, including Chromium, Firefox, Opera, Midori, and Palemoon.

Users are appreciating the effort that has gone into the project, especially the HAMMER storage work, though a few have complained that progress is slow. Since the HAMMER2 used in this release is BSD licensed, it might have better potential as a Linux kernel module.

Read more about this release on DragonFly BSD.

Related reads:
• Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
• Key Takeaways from Sundar Pichai's Congress hearing over user data, political bias, and Project Dragonfly
• As Pichai defends Google's "integrity" ahead of today's Congress hearing, over 60 NGOs ask him to defend human rights by dropping DragonFly

SpaceVim 1.0.0 released with improved error key bindings, better align feature and more

Amrata Joshi
26 Dec 2018
2 min read
Yesterday, the team at SpaceVim released the first stable version, SpaceVim v1.0.0, a distribution of the Vim editor that manages collections of plugins in layers. This release comes with two major changes: the behavior of 2-LeftMouse in vimfiler has been changed, and the default font has been changed to SauceCodePro.

What's new in SpaceVim v1.0.0?

This version comes with a unicode spinners API and a layer option for the autocomplete layer. A function for customizing searching tools and a lang#scheme layer have been added. The release also adds a log for the bootstrap function and an updated runtime log for startup. Error key bindings and SpaceVim debug info have been improved, more key bindings have been added for TypeScript, and the align feature has been improved.

Major bug fixes

This release fixes Ctrlp support on Windows, as well as the layers list and the vimdoc command on Windows. The statusline icon has been fixed, and the issue with comment-paragraphs key bindings has been resolved. Missing syntax for detached FlyGrep has been added, a log has been added for generating the configuration file, and FlyGrep syntax has been improved to support different outputs.

Some users are confused about the difference between SpaceVim and Neovim. Neovim is more than a rewrite of Vim; its main goal is to provide a server that allows other editors to edit a buffer in response to keystrokes, whereas SpaceVim is just a configuration of Vim. Users are also unsure about the performance of SpaceVim and are comparing it with Spacemacs, a configuration framework for GNU Emacs.

To know more about this release in detail, visit the SpaceVim release notes.

Related reads:
• Qt for Python 5.12 released with PySide2, Qt GUI and more
• Google Cloud releases a beta version of SparkR job types in Cloud Dataproc
• Eclipse 4.10.0 released with major improvements to colors, fonts preference page and more