
Tech News

R 4.0.3 now available from Revolutions

Matthew Emerick
12 Oct 2020
1 min read
The R Core Team has released R 4.0.3 (codename: "Bunny-Wunnies Freak Out"), the latest update to the R statistical computing system. This is a minor update to the R 4.0.x series, so it should not require any changes to existing R 4.0 scripts or packages. It is available for download for Windows, macOS and Linux systems from your local CRAN mirror.

This release includes minor improvements and bug fixes, including improved timezone support on macOS, improved labels on dot charts, better handling of large tables for Fisher's exact test and chi-square tests, and control over timeouts for internet connections.

For complete details on R 4.0.3, check out the announcement linked below. R-announce mailing list: R 4.0.3 is released

Home Assistant: an open source Python home automation hub to rule all things smart

Prasad Ramesh
25 Aug 2018
2 min read
We have Amazon Alexa, Google Home and Philips Hue for smart actions in the home, but they are separate products with separate controls. What if all of your smart devices could work together under a master hub? That is Home Assistant. Home Assistant is an automation platform that can run on a Raspberry Pi. It acts as a central hub for connecting and automating all your smart devices, and it supports services like IFTTT, Pushbullet, Google Cast, and many others; currently over a thousand components are supported. It tracks the state of all the installed smart devices in your home, and every device can be controlled from a single, mobile-friendly interface. For security and privacy, all operations via Home Assistant are performed locally, meaning no data is stored in the cloud. The Home Assistant website advertises functions like having lights turn on at sunset, or dimming the lights when you watch a movie on Chromecast.

There is a disk image called Hass.io, an all-in-one solution for getting started with Home Assistant, along with a guide to installing it on a Raspberry Pi. The requirements for running Home Assistant are:

  • Raspberry Pi 3 Model B+ and a power supply (at least 2.5A)
  • A Class 10 or higher micro SD card, 32 GB or bigger
  • An SD card reader
  • An Ethernet cable (optional; Hass.io can work over WiFi)
  • Optionally, a USB stick for unattended configuration

Home Assistant is a hub: it cannot control anything on its own. Think of it as a master device that passes instructions, communicating with other devices for home automation; it can't do anything if there are no smart devices to work with. Since it is open source, there are dozens of contributions from tinkerers and DIY enthusiasts worldwide. You can check out the automation examples to learn more and reuse them. The installation is very simple, and there is a friendly UI to control your automation tasks. There is plenty of information at the Home Assistant website to get you started. They also have a GitHub repository.
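If you would rather script the hub than click through the UI, Home Assistant also exposes a REST API. The snippet below is a minimal sketch, not from the article: it assumes a recent Home Assistant install reachable at 192.168.1.10 with a long-lived access token, and the entity ID light.living_room is hypothetical.

```python
# Minimal sketch: driving Home Assistant over its REST API.
# Assumptions (not from the article): host 192.168.1.10:8123,
# a long-lived access token, and a light named "light.living_room".
import requests

BASE = "http://192.168.1.10:8123/api"
HEADERS = {
    "Authorization": "Bearer YOUR_LONG_LIVED_TOKEN",
    "Content-Type": "application/json",
}

# List the state of every entity the hub is tracking.
states = requests.get(f"{BASE}/states", headers=HEADERS).json()
print(f"{len(states)} entities tracked")

# Ask the hub to turn on one light via the light.turn_on service.
requests.post(
    f"{BASE}/services/light/turn_on",
    headers=HEADERS,
    json={"entity_id": "light.living_room"},
)
```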

JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

Bhagyashree R
28 Aug 2019
3 min read
Last month, the ECMAScript proposal for the optional chaining operator reached stage 3 of the TC39 process. This essentially means that the feature is almost finalized and is awaiting feedback from users. The optional chaining operator aims to make accessing properties through connected objects easier when there is a chance of a reference or function being undefined or null. https://twitter.com/drosenwasser/status/1154456633642119168

Why the optional chaining operator is proposed in JavaScript

Developers often need to access properties that are deeply nested in a tree-like structure, and to do this they sometimes end up writing long chains of property accesses. This can make the code error-prone: if any of the intermediate references in these chains evaluates to null or undefined, JavaScript will throw an error such as "TypeError: Cannot read property 'name' of undefined". The optional chaining operator aims to provide a more elegant way of recovering from such instances. It allows you to check for the existence of deeply nested properties in objects. How it works is that if the operand before the operator evaluates to undefined or null, the expression returns undefined. Otherwise, the property access, method or function call is evaluated normally. MDN compares this operator with the dot (.) chaining operator. "The ?. operator functions similarly to the . chaining operator, except that instead of causing an error if a reference is null or undefined, the expression short-circuits with a return value of undefined. When used with function calls, it returns undefined if the given function does not exist," the document reads. The concept of optional chaining is not new. Several other languages have support for a similar feature, including the null-conditional operator in C# 6 and later, the optional chaining operator in Swift, and the existential operator in CoffeeScript. The optional chaining operator is written '?.'. Here is what its syntax looks like:

obj?.prop       // optional static property access
obj?.[expr]     // optional dynamic property access
func?.(...args) // optional function or method call

Some properties of optional chaining:

Short-circuiting: The rest of the expression is not evaluated if an optional chaining operator encounters undefined or null on its left-hand side.
Stacking: You can apply more than one optional chaining operator in a sequence of property accesses.
Optional deletion: You can also combine the 'delete' operator with an optional chain.

Though it will be some time before the optional chaining operator lands in JavaScript, you can give it a try with a Babel plugin. To stay updated on its browser compatibility, check out the MDN web docs. Many developers are appreciating this feature. A developer on Reddit wrote, "Considering how prevalent 'Cannot read property foo of undefined' errors are in JS development, this is much appreciated. Yes, you can rant that people should do null guards better and write less brittle code. True, but better language features help protect users from developer laziness." Yesterday, the team behind V8, Chrome's JavaScript engine, also expressed their delight on Twitter: https://twitter.com/v8js/status/1166360971914481669 Read the Optional Chaining for JavaScript proposal to know more in detail.

.NET Core 3.0 Preview 6 is available, packed with updates to compiling assemblies, optimizing applications, ASP.NET Core and Blazor

Amrata Joshi
13 Jun 2019
4 min read
Yesterday, the team at Microsoft announced that .NET Core 3.0 Preview 6 is now available. It includes updates for compiling assemblies for improved startup, optimizing applications for size with the linker, and EventPipe improvements. The team has also released new Docker images for Alpine on ARM64. Additionally, they have made updates to ASP.NET Core and Blazor: the preview comes with new Razor and Blazor directive attributes, as well as authentication and authorization support for Blazor apps, and much more. https://twitter.com/dotnet/status/1138862091987800064

What's new in .NET Core 3.0 Preview 6

Docker images: The .NET Core Docker images and repos, including microsoft/dotnet and microsoft/dotnet-samples, have been updated. Docker images are now available for .NET Core as well as ASP.NET Core on ARM64.

EventPipe enhancements: With Preview 6, EventPipe now supports multiple sessions: users can consume events with EventListener in-process and have out-of-process EventPipe clients.

Assembly linking: The .NET Core 3.0 SDK offers a tool that can help reduce the size of apps by analyzing IL with the linker and cutting out unused assemblies.

Improving startup time: Users can improve the startup time of their .NET Core application by compiling their application assemblies in the ReadyToRun (R2R) format. R2R, a form of ahead-of-time (AOT) compilation, is supported with .NET Core 3.0, but it cannot be used with earlier versions of .NET Core.

Additional functionality: The Native Hosting sample posted by the team recently demonstrates an approach for hosting .NET Core in a native application. The team is now exposing general functionality to .NET Core native hosts as part of .NET Core 3.0. The functionality is mostly related to assembly loading, which makes it easier to produce native hosts.

New Razor features: In this release, the team has added support for the following new Razor features.
@attribute: The new @attribute directive adds the specified attribute to the generated class.
@code: The new @code directive is used in .razor files to specify a code block to add to the generated class as additional members.
@key: In .razor files, the new @key directive attribute specifies a value that the Blazor diffing algorithm can use to preserve elements or components in a list.
@namespace: The @namespace directive works in pages and views apps, and it is now also supported with components (.razor).

Blazor directive attributes: In this release, the team has standardized on a common syntax for directive attributes in Blazor, which makes the Razor syntax used by Blazor more consistent and predictable.

Event handlers: In Blazor, event handlers now use the new directive attribute syntax rather than the normal HTML syntax. The new syntax is similar to the HTML syntax, but the leading @ character makes C# event handlers distinct from JS event handlers.

Authentication and authorization support: With this release, Blazor has built-in support for handling authentication and authorization. The server-side Blazor template also supports the options used for enabling the standard authentication configurations with ASP.NET Core Identity, Azure AD, and Azure AD B2C.

Certificate and Kerberos authentication in ASP.NET Core: Preview 6 brings Certificate and Kerberos authentication to ASP.NET Core. Certificate authentication requires users to configure the server to accept certificates, then add the authentication middleware in Startup.Configure and the certificate authentication service in Startup.ConfigureServices.

Users are happy with this news and think the updates will be useful. https://twitter.com/gcaughey/status/1138889676192997380 https://twitter.com/dodyg/status/1138897171636531200 https://twitter.com/acemod13/status/1138907195523907584 To know more about this news, check out the official blog post.

GNOME 3.32 released with fractional scaling, improvements to desktop, web and much more

Amrata Joshi
14 Mar 2019
3 min read
Yesterday, the team at GNOME released GNOME 3.32, the latest version of GNOME 3, a free and open-source desktop environment for Unix-like operating systems. This release comes with improvements to the desktop, GNOME Web, and much more.

What's new in GNOME 3.32?

Fractional scaling: Fractional scaling is available as an experimental option that offers several fractional values with good visual quality on any given monitor. This feature is a major enhancement for the GNOME desktop. It requires manually adding scale-monitor-framebuffer to the org.gnome.mutter experimental-features settings key.

Improved data structures in the GNOME desktop: This release improves foundational data structures in the GNOME desktop for a faster, snappier feel in the animations, icons and top shell panel. The search database has been improved, making search faster, and the on-screen keyboard now supports an emoji chooser.

New automation mode in GNOME Web: GNOME Web now comes with a new automation mode, which allows the application to be controlled by WebDriver. The reader mode has been enhanced and now features a set of customizable preferences and an improved style. Touchpad users can now take advantage of more gestures while browsing; for example, swipe left or right to go back or forward through browsing history.

New settings for permissions: Settings gains a new "Application Permissions" panel that shows resources and permissions for various applications, including installed Flatpak applications. Users can now grant permissions to certain resources when requested by the application. The Sound settings have been reworked to support a vertical layout and a more intuitive placement of options, and the night light color temperature can now be adjusted for a warmer or cooler setting.

GNOME Boxes: GNOME Boxes tries to enable 3D acceleration for virtual machines if both the guest and host support it. This leads to better performance of graphics-intensive guest applications such as games and video editors.

Application management from multiple sources: This release can handle apps available from multiple sources, such as Flatpak and distribution repositories. Flatpak app entries now list the permissions required on the details page, giving users a more comprehensive understanding of what data the software will need access to. Browsing application details is also faster now, thanks to the new XML parsing library used in this release.

To know more about this release, check out the official announcement.

Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football

Amrata Joshi
10 Jun 2019
4 min read
Last week, Google researchers announced the release of the Google Research Football Environment, a reinforcement learning environment in which agents can learn to master football. The environment provides a physics-based 3D football simulation where agents control either one or all of the football players on their team, learn how to pass between them, and manage to overcome their opponent's defense to score goals. The Football Environment offers a game engine, a set of research problems called the Football Benchmarks, the Football Academy, and much more. The researchers have released a beta version of the open-source code on GitHub to facilitate research. Let's have a brief look at each of the elements of the Google Research Football Environment.

Football Engine: the core of the Football Environment

Based on a modified version of Gameplay Football, the Football Engine simulates a football match including fouls, goals, corner and penalty kicks, and offsides. The engine is programmed in C++, which allows it to run with or without GPU-based rendering enabled. The engine allows learning from different state representations that contain semantic information, such as the players' locations, as well as learning from raw pixels. The engine can be run in either stochastic or deterministic mode for investigating the impact of randomness, and it is compatible with the OpenAI Gym API (a minimal usage sketch appears at the end of this article).

Football Benchmarks: learning from the actual field game

With the Football Benchmarks, the researchers propose a set of benchmark problems for RL research based on the Football Engine. These benchmarks highlight goals such as playing a "standard" game of football against a fixed rule-based opponent. The researchers provide three versions, the Football Easy Benchmark, the Football Medium Benchmark, and the Football Hard Benchmark, which differ only in the strength of the opponent. They also provide benchmark results for two state-of-the-art reinforcement learning algorithms, DQN and IMPALA, which can be run in multiple processes on a single machine or concurrently on many machines. These results indicate that the Football Benchmarks are research problems that vary in difficulty. According to the researchers, the Football Easy Benchmark is suitable for research on single-machine algorithms, while the Football Hard Benchmark is challenging even for massively distributed RL algorithms.

Football Academy: learning from a set of difficult scenarios

The Football Academy is a diverse set of scenarios of varying difficulty that lets researchers explore new research ideas and test high-level concepts. It also provides a foundation for investigating curriculum learning, where agents learn progressively harder scenarios. The official blog post states, "Examples of the Football Academy scenarios include settings where agents have to learn how to score against the empty goal, where they have to learn how to quickly pass between players, and where they have to learn how to execute a counter-attack. Using a simple API, researchers can further define their own scenarios and train agents to solve them."

Users are giving mixed reactions to this news, as some find nothing new in the Google Research Football Environment. A user commented on Hacker News, "I guess I don't get it... What does this game have that SC2/Dota doesn't? As far as I can tell, the main goal for reinforcement learning is to make it so that it doesn't take 10k learning sessions to learn what a human can learn in a single session, and to make self-training without guiding scenarios feasible." Another user commented, "This doesn't seem that impressive: much more complex games run at that frame rate? FIFA games from the 90s don't look much worse and certainly achieved those frame rates on much older hardware." A few others think they can learn a lot from this environment. Another comment reads, "In other words, you can perform different kinds of experiments and learn different things by studying this environment." Here's a short YouTube video demonstrating Google Research Football. https://youtu.be/F8DcgFDT9sc To know more about this news, check out Google's blog post.
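Because the Football Engine implements the standard OpenAI Gym interface, the usual reset/step loop applies. The snippet below is a minimal sketch, not taken from Google's post: it assumes the gfootball package is installed (pip install gfootball), and 'academy_empty_goal_close' is one of the published Football Academy scenarios, though availability may vary by release.

```python
# Minimal sketch of the Gym-style loop for the Football Environment.
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",  # a Football Academy scenario
    representation="extracted",           # the default spatial minimap observation
    render=False,
)

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()    # random policy as a placeholder agent
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```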

Intel introduces cryogenic control chip, ‘Horse Ridge’ for commercially viable quantum computing

Fatema Patrawala
11 Dec 2019
4 min read
On Monday, Intel Labs introduced a first-of-its-kind cryogenic control chip codenamed 'Horse Ridge'. According to Intel, Horse Ridge will enable commercially viable quantum computers and speed up the development of full-stack quantum computing systems. Intel announced that Horse Ridge will enable control of multiple quantum bits (qubits) and set a clear path toward scaling larger systems, a major milestone on the path to quantum practicality. Right now, one challenge for quantum computing is that it only works at temperatures near absolute zero, and Intel is trying to change that with this control chip. As per Intel, Horse Ridge will enable control at these very low temperatures by eliminating the hundreds of wires going into the refrigerated case that houses the quantum computer.

Horse Ridge was developed in partnership with Intel's research collaborators at QuTech at Delft University of Technology, and it is fabricated using Intel's 22-nanometer FinFET manufacturing technology. The in-house fabrication of these control chips will dramatically accelerate the company's ability to design, test, and optimize a commercially viable quantum computer, the company said. "A lot of research has gone into qubits, which can do simultaneous calculations. But Intel saw that controlling the qubits created another big challenge to developing large-scale commercial quantum systems," states Jim Clarke, director of quantum hardware at Intel, in the official press release. "It's pretty unique in the community, as we're going to take all these racks of electronics you see in a university lab and miniaturize that with our 22-nanometer technology and put it inside of a fridge," added Clarke. "And so we're starting to control our qubits very locally without having a lot of complex wires for cooling." The name "Horse Ridge" is inspired by one of the coldest regions in Oregon, known as Horse Ridge. The chip is designed to operate at cryogenic temperatures of approximately 4 Kelvin, which is about -452 degrees Fahrenheit, or -269 degrees Celsius (a conversion check follows below).

What is the innovation behind Horse Ridge

Quantum computers promise the potential to tackle problems that conventional computers can't handle by themselves. They leverage a phenomenon of quantum physics that allows qubits to exist in multiple states simultaneously; as a result, qubits can conduct a large number of calculations at the same time, dramatically speeding up complex problem-solving. But Intel acknowledges that the quantum research community still lags behind in demonstrating quantum practicality, a benchmark for determining whether a quantum system can deliver game-changing performance on real-world problems. To date, researchers have focused on building small-scale quantum systems to demonstrate the potential of quantum devices. In these efforts, researchers have relied upon existing electronic tools and high-performance-computing rack-scale instruments to connect the quantum system to the traditional computational devices that regulate qubit performance and program the system inside the cryogenic refrigerator. These devices are often custom-designed to control individual qubits, requiring hundreds of connective wires in and out of the refrigerator. However, this extensive control cabling for each qubit hinders the ability to scale the quantum system to the hundreds or thousands of qubits required to demonstrate quantum practicality, not to mention the millions of qubits required for a commercially viable quantum solution.
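For reference, here is the unit conversion behind the operating temperature quoted above; the arithmetic is a sanity check on figures that often get garbled in republication, not numbers from Intel's announcement:

\[ T_{^\circ\mathrm{C}} = T_{\mathrm{K}} - 273.15 = 4 - 273.15 = -269.15\ ^\circ\mathrm{C} \]
\[ T_{^\circ\mathrm{F}} = \tfrac{9}{5}\,T_{\mathrm{K}} - 459.67 = 7.2 - 459.67 = -452.47\ ^\circ\mathrm{F} \]

In other words, Horse Ridge's 4 K operating point is only about 4 degrees above absolute zero.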
With Horse Ridge, Intel radically simplifies the control electronics required to operate a quantum system. Replacing these bulky instruments with a highly integrated system-on-chip (SoC) will simplify system design and allow for sophisticated signal processing techniques to accelerate setup time, improve qubit performance, and let the system efficiently scale to larger qubit counts. "One option is to run the control electronics at room temperature and run coax cables down to configure the qubits. But you can immediately see that you're going to run into a scaling problem because you get to hundreds or thousands of cables and it's not going to work," said Richard Uhlig, managing director of Intel Labs. "What we've done with Horse Ridge is that it's able to run at temperatures that are much closer to the qubits themselves. It runs at about 4 degrees Kelvin. The innovation is that we solved the challenges around getting CMOS to run at those temperatures and still have a lot of flexibility in how the qubits are controlled and configured." To know more about this exciting news, check out the official announcement from Intel.

Surprise NPM layoffs raise questions about the company culture

Fatema Patrawala
02 Apr 2019
8 min read
Headlines about the recent NPM layoffs have raised questions about the company's culture and ethics. NPM, which stands for Node Package Manager, is now being derided as "Not Politely Managed". The San Francisco startup NPM Inc, the company behind the widely used NPM JavaScript package repository, laid off five employees in an unprofessional and unethical manner. The incident underscores the fact that many of us, while accepting those lucrative job offers, merely ask companies not to be unethical and seldom expect them to actually be good. Indeed, social psychologist Roy Baumeister convincingly argues there is an evolutionary reason to focus more on getting people to avoid bad things than to do good things; among other reasons, humans are hardwired to consider potential threats that could harm us. Bob Sutton, author of influential books like Good Boss, Bad Boss and The No Asshole Rule, draws on Baumeister's work to highlight why it is so critical to stamp out poorly behaving leaders (and employees) in organizations.

Frédéric Harper, a developer advocate who was among those who lost their jobs, posted at length about the situation on Twitter. His concerns did not come from being laid off. That happens, he said, and will happen again. "It's the total lack of respect, empathy and professionalism of the process," he said. In an email to The Register, he said there appeared to be a disconnect between the company's professed values and its behavior.

The NPM layoffs took root under new leadership

The layoffs actually have their roots in last summer, when the company hired a new CEO, Bryan Bogensberger, to take the company from about $3m in annual revenue to 10x-20x that, explained an early NPM employee who spoke with The Register on condition of anonymity. Bogensberger was previously the CEO and co-founder of Inktank, a leading provider of scale-out, open source storage systems that was acquired by Red Hat, Inc. for $175 million in 2014. He has been running NPM since around July or August 2018, a source explained, but wasn't actually announced as CEO until January 2019 because his paperwork wasn't in order. Bogensberger brought in his own people, displacing longtime NPM staffers. "As he stacked the management ranks with former colleagues from a previous startup, there were unforced errors," another source told The Register. A culture of suspicion and hostility emerged under the new leadership. An all-hands meeting was held at NPM at which employees were encouraged to ask frank questions about the company's new direction. Those who spoke up were summarily fired last week, the individual said, at the recommendation of an HR consultant. https://twitter.com/ThatMightBePaul/status/1112843936136159232

People were very surprised by the layoffs at NPM. "There was no sign it was coming. It wasn't skills-based because some of them heard they were doing great," said CJ Silverio, the ex-CTO at NPM who was laid off last December. Silverio and Harper are both publicizing the layoffs because they declined to sign the non-disparagement clause in the NPM severance package, which prevents public disclosure of the company's wrongdoing. A California law that came into effect in January, SB 1300, prohibits non-disparagement clauses in employment severance packages, but in general such clauses are legal. One of the employees fired last Friday was a month away from having stock options vest. The individual could have retained those options by signing the non-disparagement clause, but refused. https://twitter.com/neverett/status/1110626264841359360

"We can not comment on confidential personnel matters," said CEO Bryan Bogensberger. "However, since November 1, we have approximately doubled in size to 55 people today, and continue to hire aggressively for many positions that will optimize and expand our ability to support and grow the JavaScript ecosystem over the long term."

The JavaScript community sees it as a leadership failure

The community is full of outrage over this incident; many have called it a 100% leadership failure. Others have commented that they would put NPM on their "do not apply" list of employers. This news comes as a huge disappointment, and questions have been raised about the continuity of the npm registry. Some commenters have proposed creating a non-profit node package registry, while others have downgraded their paid package subscriptions to free ones. Rebecca Turner, a core contributor to the project and one of Harper's direct reports, has voluntarily resigned in solidarity with her colleagues who were let go. https://twitter.com/ReBeccaOrg/status/1113121700281851904

How goodness inspires goodness in organizations

Compelling research by David Jones and his colleagues finds that job applicants prefer to work for companies that show real social responsibility: those that improve their communities, the environment, and the world. Employees are most likely to be galvanized by leaders who are actively perceived to be fair, virtuous, and self-sacrificing. Separate research by Ethical Systems founder Jonathan Haidt demonstrates that such leaders influence employees to feel a sense of "elevation", a positive emotion that lifts them up as a result of moral excellence. Liz Fong-Jones, a developer advocate at Honeycomb, tweeted about the npm layoffs that she would never want to be a manager again if she had to go through this kind of process. https://twitter.com/lizthegrey/status/1112902206381064192

Layoffs are becoming more common and frequent in tech

Last week, IBM was also in the news, sued by former employees for violating laws prohibiting age discrimination in the workplace: the Older Workers Benefit Protection Act (OWBPA) and the Age Discrimination in Employment Act (ADEA). Another shocker last week was Oracle laying off a large number of employees as part of an "organizational restructuring". The reason behind this round of layoffs was not clear: some said it was done to save money, others that people working on a legacy product were let go. While all of this raises questions about company culture, it is fair to say that the Internet and social media make corporate scandals harder than ever to hide. With real social responsibility easier than ever to see and applaud, we hope to see more of "the right things" actually getting done.

Update: NPM's statement, 10 days after the incident

After receiving public and community backlash over its actions, NPM published a statement on Medium on April 11: "we let go of 5 people in a company restructuring. The way that we undertook the process, unfortunately, made the terminations more painful than they needed to be, which we deeply regret, and we are sorry. As part of our mission, it's important that we treat our employees and our community well. We will continue to refine and review our processes internally, utilizing the feedback we receive to be the best company and community we can be." Does this mean that any company can fire its employees for its own selfish motives and later apologize to clean up its image?

Update, 14 June: special report from The Register

The Register published a special report last Friday saying that NPM Inc, the company behind the JavaScript package registry, is planning to fight union-busting complaints brought to America's labor watchdog by fired staffers, rather than settling the claims. An NLRB filing obtained by The Register alleges several incidents in which those terminated claim executives took action against them in violation of labor laws. On February 27, 2019, the filing states, a senior VP "during a meeting with employees at a work conference in Napa Valley, California, implicitly threatened employees with unspecified reprisals for raising group concerns about their working conditions." The document also describes a March 25, 2019, video conference call in which it was "impliedly [sic] threatened that [NPM Inc] would terminate employees who engaged in union activities," and a message sent over the company's Keybase messaging system that threatened similar reprisals "for discussing employee layoffs." The alleged threats followed a letter presented to this VP in mid-February that outlined employee concerns about "management, increased workload, and employee retention." The Register has heard accounts of negotiations between the tech company and its aggrieved former employees, from individuals apprised of the talks, during which a clearly fuming CEO Bryan Bogensberger called off settlement discussions, a curious gambit – if accurate – given the insubstantial amount of money on the table. NPM Inc has defended its moves as necessary to establish a sustainable business, but in prioritizing profit – arguably at the expense of people – it has alienated a fair number of developers who now imagine a future that doesn't depend as much on NPM's resources. The situation has deteriorated to the point that former staffers say the code for the npm command-line interface (CLI) suffers from neglect, with unfixed bugs piling up and pull requests languishing. The Register understands further staff attrition related to the CLI is expected. To know more about this story in detail, check out the report published by The Register.

Why is Perl 6 considering a name change?

Bhagyashree R
30 Aug 2019
4 min read
There have been several discussions about renaming Perl 6. Earlier this month, another such discussion started when Elizabeth Mattijsen, one of the Perl 6 core developers, submitted an issue titled ""Perl" in the name "Perl 6" is confusing and irritating". She suggested changing the language's name to Camelia, which is also the name of Perl's mascot.

In the year 2000, the Perl team decided to break everything and came up with a whole new set of design principles. Their goal was to remove the "historical warts" from the language, including the confusion surrounding sigil usage for containers, the ambiguity between the select functions, and more. Based on these principles, Perl was redesigned into Perl 6. For Perl 6, Larry Wall and his team envisioned making it both a better object-oriented and a better functional programming language. There are many differences between Perl 5 and Perl 6. For instance, in Perl 5 you need to choose things like a concurrency system and processing utilities, whereas in Perl 6 these features are part of the language itself. In an interview with the I Programmer website, when asked how the two languages differ, Moritz Lenz, a Perl and Python developer, said, "They are distinct languages from the same family of languages. On the surface, they look quite similar and they are designed using the same principles."

Why developers want to rename Perl 6

Because of the aforementioned differences, many developers find the "Perl 6" name confusing. The name does not convey the fact that it is a brand-new language; developers may instead think that it is simply the next version of the Perl language, or believe that it is faster, more stable, or otherwise better than the earlier Perl. Also, many search engines will sometimes show results for Perl 5 instead of Perl 6. "Having two programming languages that are sufficiently different to not be source compatible, but only differ in what many perceive to be a version number, is hurting the image of both Perl 5 and Perl 6 in the world. Since the word "Perl" is still perceived as "Perl 5" in the world, it only seems fair that "Perl 6" changes its name," Mattijsen wrote in the submitted issue. To avoid this confusion, Mattijsen suggests an alternative name: Camelia. Many developers agreed with her suggestion. A developer commented on the issue, "The choice of Camelia is simple: search for camelia and language already takes us to Perl 6 pages. We can also keep the logo. And it's 7 characters long, 6-ish. So while ofun and all the others have their merits, I prefer Camelia." In addition to Camelia, Raku is also a strong contender for the new name; it was suggested by Larry Wall, the creator of Perl. A developer supporting Raku said, "In particular, I think we need to discuss whether "Raku", the alternative name Larry proposed, is a viable possibility. It is substantially shorter than "Camelia" (and hits the 4-character sweet spot), it's slightly more searchable, has pleasant associations of "comfort" or "ease" in its original Japanese, in which language it even looks a little like our butterfly mascot." Some developers were not convinced by the idea of renaming the language and think it rather adds to the confusion. A developer added, "I don't see how Perl 5 is going to benefit from this. We're freeing the name, yes. They're free to reuse the versions now in however way they like, yes. Are they going to name the successor to 5.30 "Perl 6"? Of course not – that would cause more confusion, make them look stupid and make whatever spiritual successor of Perl 6 we could think of look obsolete. Would they go up to Perl 7 with the next major change? Perhaps, but they can do that anyway: they're another grown-up language that can make its own decisions :) I'm not convinced it would do anything to improve Perl 6's image either. Being Perl 6 is "standing on the shoulders of giants". Perl is a strong brand. Many people have left it because of the version confusion, yes. But I don't imagine these people coming back to check out some new Camelia language that came out. They might, however, decide to give Perl 6 a shot if they start seeing some news about it – "oh, I was using Perl 15 years ago... is this still a thing? Is that new famous version finally being out and useful? I should check it out!"" You can read the submitted issue and the discussion on GitHub for more details.

Git-bug: A new distributed bug tracker embedded in git

Melisha Dsouza
20 Aug 2018
3 min read
git-bug is a distributed bug tracker embedded in git. Using git's internal storage ensures that no files are added to your project, and you can push your bugs to the same git remote you are already using to collaborate with other people. The main idea behind implementing a distributed bug tracker in Git was to stop relying on a web service somewhere to deal with bugs, making browsing and editing bug reports offline painless. While git-bug addresses a pressing need, note that the project is not yet ready for full-fledged use: it is currently a proof of concept, released just 3 days ago at version 0.2.0. Reddit is abuzz with views on the release, with users weighing both its strengths and its drawbacks.

Now that you want to get your hands on git-bug, let's look at how to get started.

Installing git-bug, Linux packages needed, and CLI usage

To install git-bug, all you need to do is execute the following command:

go get github.com/MichaelMure/git-bug

If it's not done already, add the golang binary directory to your PATH:

export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

Alternatively, you can use pre-compiled binaries by following 3 simple steps: head over to the release page and download the appropriate binary for your system; copy the binary anywhere in your PATH; rename the binary to git-bug (or git-bug.exe on Windows). The only Linux distribution package available for this release is for Arch Linux (AUR).

Further, you can work with git-bug through its CLI using the following commands.

Create a new bug (your favorite editor will open to write a title and a message):

git bug new

Push your new entry to a remote:

git bug push [<remote>]

Pull for updates:

git bug pull [<remote>]

List existing bugs:

git bug ls

Use commands like show, comment, open or close to display and modify bugs. For more details about each command, you can run git bug <command> --help or read the command's documentation.

Features of git-bug

#1 Interactive user interface for the terminal: Use the git bug termui command to browse and edit bugs from an interactive terminal UI.

#2 A rich web UI: The git bug webui command launches a web UI that is packed inside the same Go binary and serves static content through a localhost HTTP server. It connects to the backend through a GraphQL API; take a look at the schema for more clarity.

Additional planned features include media embedding, import/export of GitHub issues, an extendable data model to support arbitrary bug trackers, and an inflatable raptor. Every new release is expected to come with exciting new features, but it is also coupled with a few minor constraints; you can check out the minor inconveniences listed on the GitHub page. We can't wait for the release to be in fully working condition. But before that, if you need any additional information on how git-bug works, head over to the GitHub page.

Intel’s 10th gen 10nm ‘Ice Lake’ processor offers AI apps, new graphics and best connectivity

Vincy Davis
02 Aug 2019
4 min read
After a long wait, Intel has officially launched its first 10th generation Core processors, code-named 'Ice Lake'. The first batch contains 11 highly integrated 10nm processors, which showcase high-performance artificial intelligence (AI) features and are designed for sleek 2-in-1s and laptops. The 'Ice Lake' processors are manufactured on Intel's 10nm process and pair a 14nm chipset in the same carrier. They include two or four Sunny Cove cores along with Intel's Gen11 graphics processing unit (GPU). The 10nm measure indicates the size of the transistors used; as a rule, the smaller the transistor, the better its power consumption.

Read More: Intel unveils the first 3D Logic Chip packaging technology, 'Foveros', powering its new 10nm chips, 'Sunny Cove'

Chris Walker, Intel corporate vice president and general manager of Mobility Client Platforms in the Client Computing Group, says that "With broad-scale AI for the first time on PCs, an all-new graphics architecture, best-in-class Wi-Fi 6 (Gig+) and Thunderbolt 3 – all integrated onto the SoC, thanks to Intel's 10nm process technology and architecture design – we're opening the door to an entirely new range of experiences and innovations for the laptop." Intel was supposed to ship 10nm processors way back in 2016; Intel CEO Bob Swan says the delay was due to the "company's overly aggressive strategy for moving to its next node." Intel has also introduced a new processor-number naming structure for the 10th generation 'Ice Lake' processors, which indicates the generation and the level of graphics performance of the processor.

What's new in the 10th generation Intel Core processors?

Intelligent performance: The 10th generation Core processors are the first purpose-built processors for AI on laptops and 2-in-1s. They are built for modern AI-infused applications and contain many features, such as: Intel Deep Learning Boost, used specifically to run complex AI workloads with greater flexibility, with a dedicated instruction set that accelerates neural networks on the CPU for maximum responsiveness; up to 1 teraflop of GPU engine compute for sustained high-throughput inference applications; and Intel's Gaussian & Neural Accelerator (GNA), an exclusive engine for background workloads such as voice processing and noise suppression at ultra-low power, for maximum battery life.

New graphics: With Iris Plus graphics, the 10th generation Core processors deliver double the graphics performance in 1080p gaming and higher-level content creation in 4K video editing, application of video filters and high-resolution photo processing. This is the first time that Intel's graphics processing unit (GPU) supports VESA's Adaptive Sync display standard, enabling a smoother gaming experience in games like Dirt Rally 2.0 and Fortnite. According to Intel, this is the industry's first integrated GPU to incorporate variable rate shading for better rendering performance, as it uses the Gen11 graphics architecture. The 10th generation Core processors also support the BT.2020 specification, so it is possible to view 4K HDR video in a billion colors.

Best connectivity: With improved board integration, PC manufacturers can innovate on form factor for sleeker designs with Wi-Fi 6 (Gig+) connectivity and up to four Thunderbolt 3 ports. Intel claims this is the "fastest and most versatile USB-C connector available."

The first batch of 11 'Ice Lake' processors comprises 6 Ice Lake U-series and 5 Ice Lake Y-series processors. Intel has revealed that laptops with the 10th generation Core processors can be expected in the holiday season this year. The post also states that Intel will soon release additional products in the 10th generation Core mobile processor family due to increased needs in computing; the upcoming processors will "deliver increased productivity and performance scaling for demanding, multithreaded workloads." Users love the new 10th generation Core processor features and are especially excited about the Gen11 graphics. https://twitter.com/Tribesigns/status/1133284822548279296 https://twitter.com/Isaacraft123/status/1156982456408596481 Many users are also expecting to see the new processors in upcoming Mac notebooks. https://twitter.com/ChernSchwinn1/status/1157297037336928256 https://twitter.com/matthewmspace/status/1157295582844575744 Head over to the Intel newsroom page for more details.

Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?

Vincy Davis
20 May 2019
5 min read
The effects of climate change are now visible in countries around the globe. The world is witnessing phenomena like higher temperatures, flooding, melting ice, and much more, and many technologies have been invented over the last decade to help humans understand and adapt to these climatic changes. Earlier this month, researchers from the Montreal Institute for Learning Algorithms, ConscientAI Labs and Microsoft Research came up with a project that aims to generate images depicting accurate, vivid, and personalized outcomes of climate change using machine learning (ML) and Cycle-Consistent Adversarial Networks (CycleGANs). This will enable individuals to make more informed choices about their climate future by creating an understanding of the effects of climate change, while maintaining scientific credibility through the use of climate model projections.

The project is to develop an ML-based tool that shows, in a personalized way, the probable effect that climate change will have on a specific location familiar to the viewer. Given an address, the tool generates an image projecting transformations that are likely to occur there, based on a formal climate model. For the initial version, the generated images show houses and buildings after flooding events.

The challenge in generating realistic images using CycleGANs is collecting the training data needed to extract the mapping function. The researchers manually searched open-source photo-sharing websites for images of houses from various neighborhoods and settings, such as suburban detached houses, urban townhouses, and apartment buildings. They gathered over 500 images of non-flooded houses and the same number of flooded locations, and resized them to 300x300 pixels. The networks were trained using the publicly available PyTorch framework. The CycleGAN model was trained on these images for 200 epochs, using the Adam solver with a batch size of 1 and training from scratch with a learning rate of 0.0002. As per the CycleGAN training procedure, the learning rate is kept constant for the first 100 epochs and linearly decayed to zero over the next 100 epochs (a sketch of this schedule follows below).

Project output and future plans

The trained CycleGAN model successfully learned an adequate mapping between grass and water, which could be applied to generate fairly realistic images of flooded houses. This works best with single-family, suburban-type houses surrounded by an expanse of grass. Of the 80 images in the test set, about 70% were successfully mapped to realistically flooded houses. This initial version of the CycleGAN model illustrates the feasibility of applying a generative model to create personalized images of an extreme climate event, flooding, that is expected to increase in frequency based on climate change projections. Subsequent versions of the model will integrate more varied types of houses and surroundings, as well as different types of climate-change-related extreme events (droughts, hurricanes, wildfires, air pollution, etc.), depending on the expected impacts at a given location, as well as forecast time horizons. There is still scope for improvement with regard to the color scheme of the generated images and the visual artifacts.
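The learning-rate schedule described above is easy to reproduce. The snippet below is a minimal sketch, not code from the paper: the stand-in model and training loop are placeholders, the learning rate and schedule follow the figures quoted in the article, and the Adam betas are an assumption carried over from the public CycleGAN reference implementation.

```python
# Sketch of the article's training schedule: Adam at lr=0.0002,
# constant for epochs 0-99, then linearly decayed to zero over epochs 100-199.
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for the CycleGAN generators/discriminators

# lr from the article; betas=(0.5, 0.999) is an assumption borrowed from the
# public CycleGAN reference implementation, not stated in the article.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.999))

def lr_lambda(epoch: int) -> float:
    # multiplier of 1.0 for the first 100 epochs, then a linear ramp down to 0
    return 1.0 - max(0, epoch - 100) / 100.0

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

for epoch in range(200):
    # ... one pass over the ~500 image pairs with batch size 1 would go here ...
    optimizer.step()      # placeholder step so the loop runs as written
    scheduler.step()
    if epoch % 50 == 0:
        print(epoch, scheduler.get_last_lr())
```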
Furthermore, to channel the public's emotional response into behavioral change or action, the researchers are planning another improvement to the model called 'choice knobs'. These will let users visually see the impact of their personal choices, such as deciding to use more public transportation, as well as the impact of broader policy decisions, such as a carbon tax or increased renewable portfolio standards. The project's greater aim is to help the general population move toward more visible public support for climate change mitigation on a national level, facilitating governmental interventions and helping make the rapid changes required for a globally sustainable economy. The researchers state that they need to explore more physical constraints on GAN training in order to incorporate more physical knowledge into these projections. This would enable a GAN model to transform a house to its projected flooded state while also taking into account the forecast simulations of the flooding event, represented by the physical variable outputs and probabilistic scenarios from a climate model for a given location.

Response to the project

A few developers liked the idea of using technology to produce realistic images depicting the effect of climate change on your own hometown, which may help people understand its adverse effects. https://twitter.com/jameskobielus/status/1129392932988096513 Other developers are not sure that showing people a picture of their house submerged in water will make any difference. A user on Hacker News comments, "The threshold for believing the effects of climate change has to change from reading/seeing to actually being there and touching it. Or some far more reliable system of remote verification has to be established." Another user adds, "Is this a real paper? It's got to be a joke, right? a parody? It's literally a request to develop images to be used for propaganda purposes. And for those who will say that climate change is going to end the world, yeah, but that doesn't mean we should develop propaganda technology that could be used for some other political purpose." There are already many studies and much evidence making people aware of the effects of climate change; a picture of their house submerged in water may not move them any further. Climate change is already happening and affecting our day-to-day lives. What we need now are stronger approaches to analysing, mitigating, and adapting to these changes, and more government policies to fight them. To know more details about the project, head over to the research paper.

Google services, ProtonMail and ProtonVPN suffered an outage yesterday

Amrata Joshi
20 Aug 2019
3 min read
Yesterday, Google reported an issue with authentication to Google App Engine sites, Identity-Aware Proxy, the Google Cloud Console, and Google OAuth 2.0 endpoints. It began at 11:30 PT and ended at 1:27 PT. Alongside Google, ProtonMail and ProtonVPN also experienced an outage yesterday: due to a server failure, a few accounts were temporarily unavailable. Proton's infrastructure team identified and fixed the issue, and the systems are now functioning normally. The official post reads, "We apologize for the inconvenience. No emails or data were lost, but some incoming emails may be delayed."

The Google Cloud Status Dashboard read, "We are currently experiencing an issue with authentication to Google App Engine sites, the Google Cloud Console, Identity Aware Proxy, and Google OAuth 2.0 endpoints." https://twitter.com/crash_signal/status/1163526809847324675 DownDetector confirmed that there was an outage in some parts of the world, and users on Twitter confirmed the news. Services like Google Drive and Gmail experienced issues: some users weren't able to access the services, while others faced slow loading times. Google's OAuth service, which underpins Google Sign-In, failed yesterday, so many users were unable to log in. OAuth is used for signing in to Google services such as Gmail, Google Calendar, and Chromebooks, and it is also used by third-party applications that let users log in with Google, so when issues come up in OAuth, users cannot log into any of these services. At 12:18 PT, a few customers reported that they could successfully log in by using an incognito window in the Chrome browser. At 12:43 PT, the team reported that mitigation work was underway and error rates had fallen.

This outage is the fourth in a row. Last month, Google suffered an outage due to an issue with Cloud Networking and Load Balancing within us-east1. Two months ago, Google Calendar was down for almost three hours around the world, and in the same month Google Cloud suffered a major outage that took down many Google services including YouTube, G Suite, Gmail, and more. Yesterday at 1:27 PT, the team posted an update: "The issue with authentication to Google App Engine sites, the Google Cloud Console, Identity Aware Proxy, and Google OAuth 2.0 endpoints has been resolved for all affected customers as of Monday, 2019-08-19 12:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence."

A few users are sceptical about the company's recent outages. One commented on Hacker News, "Google often has a outage or two around this time of the year when all the US schools come back and millions of students log in at the same time." Another user commented, "It sure feels like there have been quite a few big outages this summer (Google in particular). I wonder if they are getting sloppy or this is just bad luck?" https://twitter.com/FXE4008/status/1163522257119043590 https://twitter.com/gabe565/status/1163519934095343616
Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70

OpenTracing and OpenCensus merge into OpenTelemetry project; Google introduces OpenCensus Web

Google suffers another Outage as Google Cloud servers in the us-east1 region are cut off

Now there is a Deepfake that can animate your face with just your voice and a picture using temporal GANs

Savia Lobo
24 Jun 2019
6 min read
Last week, researchers from Imperial College London and Samsung's AI research center in the UK revealed how deepfakes can be used to generate a singing or talking video portrait from a still image of a person and an audio clip containing speech. In their paper, "Realistic Speech-Driven Facial Animation with GANs", the researchers use a temporal GAN with 3 discriminators focused on achieving detailed frames, audio-visual synchronization, and realistic expressions.

Source: arxiv.org

"The generated videos are evaluated based on sharpness, reconstruction quality, lip-reading accuracy, synchronization as well as their ability to generate natural blinks", the researchers mention in their paper.

https://youtu.be/9Ctm4rTdVTU

The researchers used the GRID, TCD TIMIT, CREMA-D, and LRW datasets. The GRID dataset has 33 speakers, each uttering 1000 short phrases containing 6 words randomly chosen from a limited dictionary. The TCD TIMIT dataset has 59 speakers uttering approximately 100 phonetically rich sentences each. The CREMA-D dataset includes 91 actors from a variety of age groups and races, each uttering 12 sentences; each sentence is acted out multiple times for different emotions and intensities. The researchers used the recommended data split for the TCD TIMIT dataset, but excluded some of the test speakers and used them as a validation set. They performed data augmentation on the training set by mirroring the videos.

Metrics used to assess the quality of generated videos

The researchers evaluated the videos using traditional image reconstruction and sharpness metrics. These metrics can determine frame quality; however, they fail to reflect other important aspects of the video, such as audio-visual synchrony and the realism of facial expressions. Hence, the researchers also proposed alternative methods capable of capturing these aspects of the generated videos.

Reconstruction Metrics

This method uses common reconstruction metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index to evaluate the generated videos. However, the researchers note that "reconstruction metrics will penalize videos for any facial expression that does not match those in the ground truth videos".
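As a rough illustration of these two reconstruction metrics, the sketch below computes PSNR and SSIM for a pair of frames using scikit-image. The random frames are stand-ins for a generated frame and its ground-truth counterpart; this is not the paper's actual evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-in frames: in the paper's setting these would be a generated
# video frame and its ground-truth counterpart.
rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 256, size=(96, 128, 3), dtype=np.uint8)
noise = rng.integers(-10, 10, size=ground_truth.shape)
generated = np.clip(ground_truth.astype(int) + noise, 0, 255).astype(np.uint8)

# PSNR (in dB): pixel-level fidelity; larger values imply better quality.
psnr = peak_signal_noise_ratio(ground_truth, generated)

# SSIM: compares luminance, contrast, and structure; larger is better.
ssim = structural_similarity(ground_truth, generated, channel_axis=-1)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```

For both metrics, larger values imply better quality, which matches the convention used in the paper.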
Sharpness Metrics

Frame sharpness is evaluated using the cumulative probability blur detection (CPBD) measure, which determines blur based on the presence of edges in the image. For this metric, as well as for the reconstruction metrics, larger values imply better quality.

Content Metrics

The content of the videos is evaluated based on how well the video captures the identity of the target and on the accuracy of the spoken words. The researchers verify the identity of the speaker using the average content distance (ACD), which measures the average Euclidean distance between the still image's representation, obtained using OpenFace, and the representations of the generated frames. The accuracy of the spoken message is measured using the word error rate (WER) achieved by a pre-trained lip-reading model; they use the LipNet model, which exceeds the performance of human lip-readers on the GRID dataset. For both content metrics, lower values indicate better accuracy.

Audio-Visual Synchrony Metrics

Synchrony is quantified using the method from Joon Son Chung and Andrew Zisserman's "Out of time: automated lip sync in the wild". In that work, Chung et al. propose the SyncNet network, which calculates the Euclidean distance between the audio and video encodings on small (0.2 second) sections of the video. The audio-visual offset is obtained by using a sliding window approach to find where the distance is minimized. The offset is measured in frames and is positive when the audio leads the video. For audio and video pairs that correspond to the same content, the distance increases on either side of the point where the minimum occurs; for uncorrelated audio and video, the distance is expected to be stable. Based on this fluctuation, the authors further propose using the difference between the minimum and the median of the Euclidean distances as an audio-visual (AV) confidence score, which determines the audio-visual correlation. Higher scores indicate a stronger correlation, whereas confidence scores smaller than 0.5 indicate that the audio and video are uncorrelated.
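A toy sketch of how such an offset and confidence score could be computed from per-window audio-video distances is below. The distance values are made up, and the centering convention for offsets is an assumption; the real SyncNet pipeline computes these distances from learned audio and video embeddings.

```python
import numpy as np

def av_offset_and_confidence(distances: np.ndarray) -> tuple[int, float]:
    """Given Euclidean distances between audio and video encodings for a
    range of candidate frame offsets (sliding window), return the offset
    at which the distance is minimal and a confidence score defined as
    median(distances) - min(distances), following Chung & Zisserman.
    """
    min_idx = int(np.argmin(distances))
    confidence = float(np.median(distances) - np.min(distances))
    # Assumed centering: index len//2 means zero offset; positive offsets
    # mean the audio leads the video.
    offset = min_idx - len(distances) // 2
    return offset, confidence

# Made-up distances for 21 candidate offsets (-10 .. +10 frames).
# A clear dip at one offset mimics well-synchronized content.
dists = np.array([8.1, 8.0, 7.9, 8.2, 7.8, 7.5, 6.9, 5.8, 4.2, 2.9,
                  2.1, 3.0, 4.5, 6.0, 7.0, 7.6, 7.9, 8.0, 8.1, 8.2, 8.0])
offset, conf = av_offset_and_confidence(dists)
print(f"offset: {offset} frames, AV confidence: {conf:.2f}")
```

The sharp dip in the distances yields a high confidence score (correlated content), whereas a flat distance profile would give a score near zero.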
Limitations and the possible misuse of Deepfake

The limitation of this new Deepfake method is that it only works for well-aligned frontal faces. "The natural progression of this work will be to produce videos that simulate in wild conditions", the researchers mention. While this research appears to be the next milestone for GANs in generating videos from still photos, it may also be misused for spreading misinformation by morphing video content from any still photograph.

Recently, at a House Intelligence Committee hearing, Top House Democrat Rep. Adam Schiff (D-CA) warned that deepfake videos could have a disastrous effect on the 2020 election cycle. "Now is the time for social media companies to put in place policies to protect users from this kind of misinformation not in 2021 after viral deepfakes have polluted the 2020 elections," Schiff said. "By then it will be too late."

The hearing came only a few weeks after a real-life instance of a doctored political video, in which footage was edited to make House Speaker Nancy Pelosi appear drunk, spread widely on social media. "Every platform responded to the video differently, with YouTube removing the content, Facebook leaving it up while directing users to coverage debunking it, and Twitter simply letting it stand," The Verge reports. YouTube took the video down; Facebook, however, refused to remove it. Neil Potts, Public Policy Director of Facebook, had stated that if someone posted a doctored video of Zuckerberg, like the one of Pelosi, it would stay up. After this, on June 11, a fake video of Mark Zuckerberg was posted on Instagram under the username bill_posters_uk. In the video, Zuckerberg appears to give a threatening speech about the power of Facebook.

https://twitter.com/motherboard/status/1138536366969688064

Omer Ben-Ami, one of the founders of Canny, says that the video was made to educate the public on the uses of AI and to make them realize its potential. Though Zuckerberg's video was meant to retain the educational value of deepfakes, it shows how easily the technique can be misused. Although some users say it has interesting applications, many are concerned that the chances of misusing this software outweigh the chances of putting it to good use.

https://twitter.com/timkmak/status/1141784420090863616

A user commented on Reddit, "It has some really cool applications though. For example in your favorite voice acted video game, if all of the characters lips would be in sync with the vocals no matter what language you are playing the game in, without spending tons of money having animators animate the characters for every vocalization."

To know more about this new Deepfake, read the official research paper.

Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes

Worried about Deepfakes? Check out the new algorithm that manipulate talking-head videos by altering the transcripts

Machine generated videos like Deepfakes – Trick or Treat?


WireGuard launches an official MacOS app

Savia Lobo
19 Feb 2019
2 min read
Last week, the team at WireGuard announced an initial version of WireGuard for macOS and launched the app in the Mac App Store. The app makes use of Apple's Network Extension API to provide native integration into the operating system's networking stack. WireGuard is a fast, modern, and secure VPN tunnel.

The new app can import tunnels from archives and files, or you can create one from scratch (see the sketch of a tunnel configuration file at the end of this article). The developers say the app is currently undergoing rapid development, and that they are taking feedback from users while implementing new features.

According to TechCrunch, "The app is a drop-down menu in the menu bar. You can manage your tunnel and activate on-demand connections for some scenarios. For instance, you could choose to activate your VPN exclusively if you're connected to the internet using Wi-Fi, and not Ethernet."

Source: TechCrunch

The developers say, "because the app uses these deep integration APIs, we're only allowed to distribute the application using the macOS App Store (whose rejections, appeals, and eventual acceptance made for quite the stressful saga over the last week and a half)."

To know more about this announcement, read WireGuard's email thread.
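For readers who have not set up WireGuard before, here is a minimal, hypothetical example of the kind of tunnel configuration file the app can import. The keys, addresses, and endpoint are placeholders, not real credentials, and your VPN provider's values will differ.

```
[Interface]
# The client's private key (placeholder - generate your own with `wg genkey`).
PrivateKey = <client-private-key>
# Address assigned to this peer inside the tunnel.
Address = 10.0.0.2/32
DNS = 1.1.1.1

[Peer]
# The server's public key (placeholder).
PublicKey = <server-public-key>
# Route all IPv4 traffic through the tunnel.
AllowedIPs = 0.0.0.0/0
# The VPN server's public address and port (placeholder).
Endpoint = vpn.example.com:51820
# Keep NAT mappings alive (optional).
PersistentKeepalive = 25
```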
Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more

Undetected Linux Backdoor 'SpeakUp' infects Linux, MacOS with cryptominers

Final release for macOS Mojave is here with new features, security changes and a privacy flaw