
Tech News

3711 Articles
15 million jobs in Britain at stake as Artificial Intelligence robots are set to replace humans in the workforce

Natasha Mathur
23 Aug 2018
3 min read
Earlier this week, the Bank of England’s chief economist, Andy Haldane, warned that the UK needs a skills revolution, as up to 15 million jobs in Britain are at stake. This is due to a “third machine age” in which Artificial Intelligence is making obsolete a huge number of jobs that were previously the preserve of humans. Haldane says this potential "Fourth Industrial Revolution" could cause disruption on a "much greater scale" than the damage experienced during the first three Industrial Revolutions, because those revolutions were mainly about machines replacing humans at manual tasks. The fourth Industrial Revolution will be different. As Haldane told BBC Radio 4’s Today programme, “the 20th-century machines have substituted not just for manual human tasks, but cognitive ones too -- human skills machines could reproduce, at lower cost, has both widened and deepened”.

With robots becoming more intelligent, this revolution will hollow out jobs to a deeper degree than in the past. The Bank of England classifies jobs into three categories: those with a high (greater than 66%), medium (33-66%), or low (less than 33%) chance of automation. Administrative, clerical, and production jobs are at the highest risk of being replaced by robots, whereas jobs focused on human interaction, face-to-face conversation, and negotiation are less likely to suffer.

[Figure: Probability of automation by occupation]

This “hollowing out” poses a risk not only to low-paid jobs but also to mid-level jobs. Meanwhile, the UK’s Artificial Intelligence Council Chair, Tabitha Goldstaub, noted that the “challenge will be ensuring that people are prepared for the cultural and economic shifts”, with a focus on creating "the new jobs of the future" in order to avoid mass replacement by robots.
Haldane echoed Goldstaub’s sentiments, telling the BBC that “we will need even greater numbers of new jobs to be created in the future if we are not to suffer this longer-term feature called technological unemployment”.

Every cloud has a silver lining

Although the automation of these tasks could lead to mass unemployment, Goldstaub is positive. She says “there are great opportunities ahead as well as significant challenges”: the challenge is bracing the UK workforce for the coming change, while the silver lining, according to Goldstaub, is that “there is a hopeful view -- that a lot of these jobs (existing) are boring, mundane, unsafe, drudgery - there could be -- liberation from -- these jobs and a move towards a brighter world.”

Read next:
OpenAI builds reinforcement learning based system giving robots human like dexterity
OpenAI Five bots beat a team of former pros at Dota 2
What if robots get you a job! Enter Helena, the first artificial intelligence recruiter
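The Bank of England's three bands can be expressed as a simple threshold function. The sketch below is purely illustrative (the thresholds come from the article; the function name and example probabilities are my own):

```python
def automation_risk_band(probability):
    """Classify a job's automation probability into the Bank of England's
    three bands: high (> 66%), medium (33-66%), low (< 33%)."""
    if probability > 0.66:
        return "high"
    if probability >= 0.33:
        return "medium"
    return "low"

# Administrative and clerical roles sit in the high band; jobs built on
# face-to-face interaction and negotiation tend toward the low band.
print(automation_risk_band(0.72))  # high
print(automation_risk_band(0.20))  # low
```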

Facebook’s Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others

Bhagyashree R
14 Sep 2018
3 min read
Yesterday, Facebook announced that Cadence, Esperanto, Intel, Marvell, and Qualcomm Technologies Inc. have committed to supporting its Glow compiler in future silicon products. With this partnership, Facebook aims to build a hardware ecosystem for machine learning. With Glow, its partners will be able to rapidly design and optimize new silicon products for AI and ML and help Facebook scale its platform. Facebook also plans to expand this ecosystem by adding more partners in 2018.

What is Glow?

Glow is a machine learning compiler used to speed up the performance of deep learning frameworks on different hardware platforms. The name “Glow” comes from Graph-Lowering, the main technique the compiler uses to generate efficient code. The compiler is designed to allow state-of-the-art compiler optimizations and code generation for neural network graphs. With Glow, hardware developers and researchers can focus on building next-generation hardware accelerators that can be supported by deep learning frameworks like PyTorch. Hardware accelerators for ML solve a range of distinct problems: some focus on inference, while others focus on training.

How does it work?

Glow accepts a computation graph from deep learning frameworks such as PyTorch and TensorFlow and generates highly optimized code for machine learning accelerators. To do so, it lowers the traditional neural network dataflow graph into a two-phase, strongly-typed intermediate representation.

[Image source: Facebook]

The high-level intermediate representation allows the optimizer to perform domain-specific optimizations. The lower-level intermediate representation, an instruction-based, address-only representation, allows the compiler to perform memory-related optimizations such as instruction scheduling, static memory allocation, and copy elimination. Finally, the optimizer performs machine-specific code generation to take advantage of specialized hardware features.
Glow supports a large number of input operators as well as a large number of hardware targets thanks to its lowering phase, which eliminates the need to implement all operators on all targets. The lowering phase reduces the input space and allows new hardware backends to focus on a small number of linear algebra primitives. You can read more about Facebook’s goals for Glow in its official announcement. If you are interested in how it works in more detail, check out the research paper and the GitHub repository.

Read next:
Facebook launches LogDevice: An open source distributed data store designed for logs
Google’s new What-if tool to analyze Machine Learning models and assess fairness without any coding
Facebook introduces Rosetta, a scalable OCR system that understands text on images using Faster-RCNN and CNN
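The "graph lowering" idea can be sketched in a few lines. This is a toy illustration of the concept, not Glow's actual API or operator set: a high-level neural-network node is rewritten into the small set of linear-algebra primitives that a hardware backend must implement.

```python
def lower(node):
    """Lower one high-level graph node into primitive operations.
    The node names and primitives here are illustrative only."""
    if node == "FullyConnected":
        return ["MatMul", "Add"]   # y = W.x + b
    if node == "ReLU":
        return ["Max"]             # max(x, 0)
    return [node]                  # already a primitive

# A two-node graph lowers to three primitives; a new backend only has
# to implement the primitives, not every high-level operator.
graph = ["FullyConnected", "ReLU"]
lowered = [prim for node in graph for prim in lower(node)]
print(lowered)  # ['MatMul', 'Add', 'Max']
```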

Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Richard Gall
28 Sep 2018
3 min read
Gremlin, the chaos engineering platform, has revealed some exciting news today to coincide with the very first chaos engineering conference, Chaos Conf. Not only has the company raised $18 million in its series B funding round, it has also launched a brand new feature: Application Level Fault Injection (ALFI). ALFI brings a whole new dimension to the Gremlin platform, allowing engineering teams to run resiliency tests - or 'chaos experiments' - at the application level. Up until now, tests could only be run at the infrastructure level, targeting a specific host or container (and containers are only a recent addition).

Bringing chaos engineering to serverless applications

One of the benefits of ALFI is that it makes it possible to run 'attacks' on serverless applications. Citing Cloudability's State of the Cloud 2018 report, the press release highlights that serverless adoption is growing rapidly. This means Gremlin can now expand its use cases and continue its broader mission: helping engineering teams improve the resiliency of their software in a manageable and accessible way.

Matt Fornaciari, Gremlin CTO and co-founder, said: “With ALFI one of the first problems we wanted to address was improving the reliability and understanding of serverless providers like AWS Lambda and Azure Functions. It’s a tough problem to solve because the host is abstracted and it’s a very new technology -- but now we can run attacks at the application level, and with a level of precision that isn’t possible at the infrastructure level. We are giving them a scalpel to very specifically target particular communications between different systems and services.”

One of the great benefits of ALFI is that it should help engineers tackle types of threat that might be missed if you focus only on infrastructure.
Yan Cui, Principal Engineer at sports streaming service DAZN, explained: "AWS Lambda protects you against some infrastructure failures, but you still need to defend against weakness in your own code. Application-level fault injection is a great way to uncover these weaknesses."

A new chapter for Gremlin and a big step forward for chaos engineering

It would seem that Gremlin is about to embark on a new chapter. What will be even more interesting is the wider impact chaos engineering has on the industry. Research such as this year's Packt Skill Up survey indicates that chaos engineering is a trend still in an emergent phase. If Gremlin can develop a product that makes chaos engineering not only relatively accessible but also palatable for those making technical decisions, we might start to see things changing.

It's clear that Redpoint Ventures, the VC firm leading Gremlin's Series B funding, sees a lot of potential in what the platform can offer the software landscape. Managing Director Tomasz Tunguz said: "In a world where nearly every business is an online business, Gremlin makes companies more resilient and saves millions of dollars in unnecessary disasters and outages. We’re thrilled to join them on this journey."
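The core idea behind application-level fault injection can be sketched with a decorator that makes a configurable fraction of calls fail, so you can test the caller's error handling. This is a minimal illustration of the technique, not Gremlin's API; the function names are hypothetical.

```python
import random

def inject_faults(failure_rate, rng=random.random):
    """Wrap a function so that a fraction of calls raises an injected fault."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            if rng() < failure_rate:
                raise RuntimeError("injected fault")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@inject_faults(failure_rate=1.0)  # fail every call, for demonstration
def call_downstream_service():
    return "ok"

try:
    call_downstream_service()
except RuntimeError as err:
    print(err)  # injected fault
```

The precision Fornaciari describes comes from targeting a specific call site like this, rather than degrading a whole host or container.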

Unity ML-Agents Toolkit v0.6 gets two updates: improved usability of Brains and workflow for Imitation Learning

Sugandha Lahoti
19 Dec 2018
2 min read
Unity ML-Agents Toolkit v0.6 is getting two major enhancements, the Unity team announced in a blog post on Monday. The first update turns Brains from MonoBehaviours into ScriptableObjects, improving their usability. The second allows developers to record expert demonstrations and use them for offline training, providing a better user workflow for Imitation Learning.

Brains are now ScriptableObjects

In previous versions of the ML-Agents Toolkit, Brains were GameObjects attached as children of the Academy GameObject, which made it difficult to re-use Brains across Unity scenes within the same project. In the v0.6 release, Brains are ScriptableObjects, making them manageable as standard Unity assets. This makes it easy to use them across scenes and to create Agent Prefabs with Brains pre-attached. The Unity team has introduced the Learning Brain ScriptableObject, which replaces the previous Internal and External Brains, along with Player and Heuristic Brain ScriptableObjects to replace the Player and Heuristic Brain types, respectively. Developers can no longer change the type of Brain with the Brain Type drop-down; instead they create a different Brain for Player and Learning from the Assets menu. The BroadcastHub in the Academy component keeps track of which Brains are being trained.

Record expert demonstrations for offline training

The Demonstration Recorder allows users to record the actions and observations of an Agent while playing a game. These recordings can be used to train Agents at a later time via Imitation Learning or to analyze the data. Essentially, the Demonstration Recorder lets you capture training data once and reuse it across multiple training sessions, rather than capturing it every time. Users add the Demonstration Recorder component to their Agent, check Record, and give the demonstration a name. To train an Agent with the recording, users modify the hyperparameters in the training configuration.
Check out the documentation on GitHub for more information, and read more about the new enhancements on the Unity Blog.

Read next:
Getting started with ML agents in Unity [Tutorial]
Unity releases ML-Agents toolkit v0.5 with Gym interface, a new suite of learning environments
Unite Berlin 2018 Keynote: Unity partners with Google, launches ML-Agents Toolkit 0.4, Project MARS and more

Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more

Bhagyashree R
20 Jun 2019
3 min read
Yesterday, the team behind Qt announced the release of Qt 5.13. This release comes with fully-supported Qt for WebAssembly, a Chromium 73-based Qt WebEngine, and many other updates. In this release, the Qt community and the team have focused on improving the tooling to make designing, developing, and deploying software with Qt more efficient.

https://twitter.com/qtproject/status/1141627444933398528

Following are some of the Qt 5.13 highlights.

Fully-supported Qt for WebAssembly

Qt for WebAssembly makes it possible to build Qt applications for web browsers. The team previewed this platform in Qt 5.12, and beginning with this release Qt for WebAssembly is fully supported. The module uses Emscripten, the LLVM-to-JavaScript compiler, to compile Qt applications for a web server. This allows developers to run their native applications in any browser that supports WebAssembly.

Updates in the Qt QML module

The Qt QML module enables you to write applications and libraries in the QML language. Qt 5.13 comes with improved support for enums declared in C++. With this release, a JavaScript “null” used as a binding value is optimized at compile time. Also, QML now generates function tables on 64-bit Windows, making it possible to unwind the stack through JITed functions.

Updates in Qt Quick and Qt Quick Controls 2

Qt Quick is the standard library for writing QML applications, providing all the basic types required for creating user interfaces. This release adds support to TableView for hiding rows and columns. Qt Quick Controls 2 provides a set of UI controls for creating user interfaces. This release brings a new control named SplitView, with which you can lay out items horizontally or vertically with a draggable splitter between each item. Additionally, the team has added a cache property to the icon.
Qt WebEngine

Qt WebEngine provides a web browser engine that makes it easier to embed content from the web into applications on platforms that do not have a native web engine. The engine uses code from the open-source Chromium project and is now based on Chromium 73. This latest version supports PDF viewing via an internal Chromium extension, the Web Notifications API, and thread-safe, page-specific URL request interceptors. It also comes with an application-local client certificate store and client certificate support from QML.

Lars Knoll, Qt’s CTO, and Tuukka Turunen, Qt’s Head of R&D, will hold a webinar on July 2 to summarize all the news around Qt 5.13. Read the official announcement on Qt’s website to know more in detail.

Read next:
Qt Creator 4.9.0 released with language support, QML support, profiling and much more
Qt installation on different platforms [Tutorial]
Qt Creator 4.9 Beta released with QML support, programming language support and more!

A new WPA/WPA2 security attack in town: Wi-Fi routers, watch out!

Savia Lobo
07 Aug 2018
3 min read
Jens "atom" Steube, the developer of the popular Hashcat password-cracking tool, recently developed a new technique to obtain user credentials over WPA/WPA2 security. With it, attackers can easily retrieve the Pairwise Master Key Identifier (PMKID) from a router.

WPA/WPA2, the Wi-Fi security protocols, enable a secure wireless connection between devices using encryption via a PSK (Pre-Shared Key). The WPA2 protocol was considered highly secure against attacks; however, a method known as the KRACK attack, discovered in October 2017, succeeded (at least in theory) in decrypting the data exchanged between devices. Steube discovered the new method while looking for ways to crack the WPA3 wireless security protocol. According to Steube, this method works against almost all routers utilizing 802.11i/p/q/r networks with roaming enabled.

https://twitter.com/hashcat/status/1025786562666213377

How does this new WPA/WPA2 attack work?

The new attack works by extracting the RSN IE (Robust Security Network Information Element) from a single EAPOL frame. The RSN IE is an optional field containing the PMKID generated by a router when a user tries to authenticate. Previously, to crack user credentials, an attacker had to wait for a user to log in to a wireless network, then capture the four-way handshake in order to crack the key. With the new method, an attacker simply attempts to authenticate to the wireless network, retrieves a single frame to get the PMKID, and can then use it to recover the Pre-Shared Key (PSK) of the wireless network.

A boon for attackers?

The new method makes it easier to obtain the hash containing the pre-shared key, which still needs to be cracked. That process takes a long time, depending on the complexity of the password. However, most users don’t change their wireless password and simply use the PSK generated by their router.
Steube, in his post on Hashcat, said: "Cracking PSKs is made easier by some manufacturers creating PSKs that follow an obvious pattern that can be mapped directly to the make of the routers. In addition, the AP mac address and the pattern of the ESSID allows an attacker to know the AP manufacturer without having physical access to it." He also stated that attackers pre-collect the patterns used by the manufacturers and create generators for each of them, which can then be fed into Hashcat. Some manufacturers use patterns that are too large to search, but others do not. The faster one’s hardware is, the faster one can search through such a keyspace; a typical manufacturer’s PSK of length 10 takes 8 days to crack on a 4-GPU box.

How can users safeguard their router’s passwords?

Create your own key rather than using the one generated by the router. The key should be long and complex, consisting of numbers, lower-case letters, upper-case letters, and symbols (&%$!). Steube personally uses a password manager and lets it generate truly random passwords of length 20-30. You can follow in the researcher's footsteps and safeguard your router using the tips above. Read more about this new Wi-Fi security attack on the Hashcat forum.

Read next:
NetSpectre attack exploits data from CPU memory
Cisco and Huawei Routers hacked via backdoor attacks and botnets
Finishing the Attack: Report and Withdraw
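Steube's advice - a long key drawn from a mixed alphabet, generated randomly rather than taken from the router's default - can be sketched with Python's standard `secrets` module (the function name and default length are my own choices, within his suggested 20-30 range):

```python
import secrets
import string

def generate_psk(length=24):
    """Generate a random pre-shared key from numbers, lower- and
    upper-case letters, and a few symbols, as Steube recommends."""
    alphabet = string.ascii_lowercase + string.ascii_uppercase + string.digits + "&%$!"
    return "".join(secrets.choice(alphabet) for _ in range(length))

psk = generate_psk()
print(len(psk))  # 24
```

Unlike a manufacturer-patterned PSK, a key like this gives an attacker no pattern to pre-collect, so cracking falls back to brute force over the full keyspace.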
Linux 5.2 releases with inclusion of Sound Open Firmware project, new mount API, improved pressure stall information and more

Vincy Davis
09 Jul 2019
5 min read
Two days ago, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.2 in his usual humorous way, describing it as ‘Bobtail Squid’. The release includes the Sound Open Firmware (SOF) project, improved pressure stall information, a new mount API, significant performance improvements in the BFQ I/O scheduler, new GPU drivers, optional support for case-insensitive names in ext4, and more. The previous version, Linux 5.1, was released exactly two months ago.

Torvalds says, “there really doesn't seem to be any reason for another rc, since it's been very quiet. Yes, I had a few pull requests since rc7, but they were all small, and I had many more that are for the upcoming merge window. So despite a fairly late core revert, I don't see any real reason for another week of rc, and so we have a v5.2 with the normal release timing.” Linux 5.2 also kicks off the Linux 5.3 merge window.

What’s new in Linux 5.2?

Inclusion of the Sound Open Firmware (SOF) project

Linux 5.2 includes the Sound Open Firmware (SOF) project, created to reduce firmware issues by providing an open source platform for building open source firmware for audio DSPs. The SOF project is backed by Intel and Google. It enables users to have open source firmware, personalize it, and use the power of the DSP processors in their sound cards in imaginative ways.

Improved pressure stall information

With this release, users can configure sensitive thresholds and use poll() and friends to be notified whenever a certain pressure threshold is breached within a user-defined time window. This allows Android, for example, to monitor and prevent mounting memory shortages before they cause problems for the user.

New mount API

In Linux 5.2, the developers have redesigned the entire mount API, adding six new syscalls: fsopen(2), fsconfig(2), fsmount(2), move_mount(2), fspick(2), and open_tree(2).
The previous mount(2) interface made it hard for applications and users to understand the returned errors, was not suitable for specifying multiple sources (as overlayfs needs), and did not allow mounting a file system into another mount namespace.

Significant performance improvements in the BFQ I/O scheduler

BFQ is a proportional-share I/O scheduler, available for block devices since the 4.12 kernel release. It associates each process or group of processes with a weight and grants a fraction of the available I/O bandwidth proportional to that weight. In Linux 5.2, performance tweaks to the BFQ I/O scheduler mean that application start-up time under load has decreased by up to 80%, drastically increasing performance and reducing execution time.

New GPU drivers for ARM Mali devices

In the past, the Linux community had to create open source drivers for the Mali GPUs, as ARM has never been open source friendly with its GPU drivers. Linux 5.2 has two new community drivers for ARM Mali accelerators: lima covers the older t4xx series and panfrost the newer 6xx/7xx series.

More CPU bug protection, and a "mitigations" boot option

The Linux 5.2 release adds more infrastructure to deal with the Microarchitectural Data Sampling (MDS) hardware vulnerability, which allows access to data held in various CPU internal buffers. Also, to help users deal with the ever-increasing number of CPU bugs across different architectures, the kernel boot option mitigations= has been added: a set of curated, arch-independent options to enable or disable protections regardless of the system they are running on.

clone(2) returns pidfds

Due to the design of Unix, sending signals to processes or gathering /proc information is not always safe because of the possibility of PID reuse.
With clone(2) returning pidfds, users can get pidfds at process creation time, usable with the pidfd_send_signal(2) syscall. pidfds let Linux avoid the PID-reuse problem, and the new clone(2) flag makes it even easier to obtain them, providing a safe way to signal processes and read their metadata.

Optional support for case-insensitive names in ext4

This release implements support for case-insensitive file name lookups in ext4, based on the feature bit and the encoding stored in the superblock. Users can configure directories with the chattr +F (EXT4_CASEFOLD_FL) attribute. The attribute can only be enabled on empty directories for filesystems that support the encoding feature, preventing collision of file names that differ only by case.

Freezer controller added for cgroups v2

A freezer controller provides the ability to stop the workload in a cgroup and temporarily free up some resources (cpu, io, network bandwidth and, potentially, memory) for other tasks. Cgroups v2 lacked this functionality until this release. The functionality is always available and is represented by the cgroup.freeze and cgroup.events cgroup control files.

Device mapper 'dust' target added

Linux 5.2 adds a device mapper 'dust' target to simulate a device that has failing sectors and/or read failures, with the ability to enable the emulation of read failures at an arbitrary time. The 'dust' target aims to help storage developers and sysadmins who want to test their storage stack.

Users are quite happy with the Linux 5.2 release:

https://twitter.com/ejizhan/status/1148047044864557057
https://twitter.com/konigssohne/status/1148014299484512256
https://twitter.com/YuzuSoftMoe/status/1148419200228179968

Linux 5.2 has many other improvements in the file systems, memory management, block layer, and more. Visit the kernelnewbies page for more details.
Read next:
“Don’t break your users and create a community culture”, says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
Canonical, the company behind the Ubuntu Linux distribution, was hacked; Ubuntu source code unaffected
OpenWrt 18.06.4 released with updated Linux kernel, security fixes for Curl and the Linux kernel, and much more!
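The pressure stall information mentioned above is exposed under /proc/pressure/ (cpu, memory, io), one line per severity in a documented key=value format. The sketch below parses that line format from a sample string, without touching /proc, the way a monitoring agent would before comparing avg10 against its configured threshold (the sample values are made up):

```python
def parse_psi_line(line):
    """Parse one PSI line, e.g. 'some avg10=1.25 avg60=0.40 avg300=0.10 total=123456',
    into its kind ('some' or 'full') and a dict of float fields."""
    kind, _, rest = line.partition(" ")
    fields = dict(pair.split("=") for pair in rest.split())
    return kind, {key: float(value) for key, value in fields.items()}

sample = "some avg10=1.25 avg60=0.40 avg300=0.10 total=123456"
kind, stats = parse_psi_line(sample)
print(kind, stats["avg10"])  # some 1.25
```

On a real system the same function would be fed lines read from, say, /proc/pressure/memory.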

Exim patches a major security bug found in all versions that left millions of Exim servers vulnerable to security attacks

Amrata Joshi
09 Sep 2019
3 min read
Last week, a vulnerability was found in all versions of Exim, a mail transfer agent (MTA), that when exploited can let attackers run malicious code with root privileges. According to the Exim team, all Exim servers running version 4.92.1 and earlier are vulnerable. On September 4, the Exim team published a warning on the Openwall information security mailing list regarding the critical security flaw, and on Friday it released 4.92.2 to address the vulnerability.

The vulnerability, with the ID CVE-2019-15846, was reported in July by a security researcher known as Zerons. It allows attackers to take advantage of the TLS Server Name Indication and execute programs with root privileges on servers that accept TLS connections. An attacker can simply create a buffer overflow to gain access to a server running Exim. Because the bug doesn’t depend on the TLS library used by the server, both GnuTLS and OpenSSL are affected.

Exim serves around 57% of all publicly reachable email servers on the internet. It was initially designed for Unix servers, is currently available for Linux and Microsoft Windows, and is also used for email in cPanel. Exim's advisory says: "In the default runtime configuration, this is exploitable with crafted ServerName Indication (SNI) data during a TLS negotiation."

Read Also: A year-old Webmin backdoor revealed at DEF CON 2019 allowed unauthenticated attackers to execute commands with root privileges on servers

Server owners can mitigate by disabling TLS support for the Exim server, but that would expose email traffic in cleartext, making it vulnerable to sniffing and interception. This mitigation can be especially risky for Exim owners in the EU, since it might lead their companies to data leaks and subsequent GDPR fines.
Exim installations do not have TLS support enabled by default, but Exim instances shipped with Linux distros do. Exim instances that ship with cPanel also support TLS by default, but the cPanel staff have integrated the Exim patch into a cPanel update and have already started rolling it out to customers.

Read Also: A vulnerability found in Jira Server and Data Center allows attackers to remotely execute code on systems

A similar vulnerability, CVE-2019-13917, was found in July; it impacted Exim 4.85 up to and including 4.92 and was patched with the release of 4.92.1. That vulnerability would also have allowed remote attackers to execute programs with root privileges. In June, the Exim team patched CVE-2019-10149, a vulnerability called "Return of the Wizard" that allowed attackers to run malicious code with root privileges on remote Exim servers. Microsoft also issued a warning in June about a Linux worm targeting Azure Linux VMs running vulnerable Exim versions.

Most users are sceptical about the mitigation plan, as they are not comfortable with disabling TLS. A user commented on Hacker News, “No kidding? Turning off TLS isn't an option at many installations. It's gotta work.”

Other interesting news in Security:
CircleCI reports of a security breach and malicious database in a third-party vendor account
Hundreds of millions of Facebook users’ phone numbers found online, thanks to an exposed server, TechCrunch reports
Espressif IoT devices susceptible to WiFi vulnerabilities can allow hijackers to crash devices connected to enterprise networks

Unity releases ML-Agents toolkit v0.5 with Gym interface, a new suite of learning environments

Sugandha Lahoti
12 Sep 2018
2 min read
In its commitment to become the go-to platform for Artificial Intelligence, Unity has released a new version of its ML-Agents Toolkit. ML-Agents Toolkit v0.5 comes with more flexible action specification, a Gym interface so researchers can more easily integrate ML-Agents environments into their training workflows, and a new suite of learning environments replicating some of the Continuous Control benchmarks used in Deep Reinforcement Learning. Unity has also released a research paper on ML-Agents, titled “Unity: A General Platform for Intelligent Agents”.

Changes to the ML-Agents Toolkit v0.5

A lot of changes have been made in ML-Agents Toolkit v0.5.

Highlighted changes to repository structure:
The python folder has been renamed ml-agents. It now contains a python package called mlagents.
The unity-environment folder, containing the Unity project, has been renamed UnitySDK.
The protobuf definitions used for communication have been added to a new protobuf-definitions folder.
Example curricula and the trainer configuration file have been moved to a new config sub-directory.

New features:
A new package, gym-unity, provides a Gym interface to wrap UnityEnvironment.
The ML-Agents Toolkit v0.5 can now run multiple concurrent training sessions with the --num-runs=<n> command line option.
Added Meta-Curriculum, which supports curriculum learning in multi-brain environments.
Action Masking for Discrete Control makes it possible to mask invalid actions each step, limiting the actions an agent can take.

Fixes and performance improvements:
Replaced some activation functions with swish.
Visual Observations use PNG instead of JPEG to avoid compression losses.
Improved python unit tests.
Multiple training sessions are available on a single GPU.
Curriculum lessons are now tracked correctly.
Developers can now visualize value estimates when using models trained with PPO from Unity with GetValueEstimate().
It is now possible to specify which camera the Monitor displays to. Console summaries will now be displayed even when running inference mode from python. Minimum supported Unity version is now 2017.4. You can read all about the new version of ML-Agents Toolkit on the Unity Blog. Unity releases ML-Agents v0.3: Imitation Learning, Memory-Enhanced Agents and more. Unity Machine Learning Agents: Transforming Games with Artificial Intelligence. Unite Berlin 2018 Keynote: Unity partners with Google, launches Ml-Agents ToolKit 0.4, Project MARS and more.
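Action masking for discrete control, mentioned above, boils down to excluding invalid actions from selection at each step. A minimal, library-free sketch of the general technique (this is an illustration, not the actual ML-Agents API; the function and names are invented for the example):

```python
import math

def select_action(action_scores, invalid_actions):
    """Pick the highest-scoring action, never choosing a masked one.

    action_scores: list of floats, one score per discrete action.
    invalid_actions: set of action indices that are invalid this step.
    """
    masked = [
        -math.inf if i in invalid_actions else score
        for i, score in enumerate(action_scores)
    ]
    # argmax over the masked scores: masked actions can never win
    return max(range(len(masked)), key=lambda i: masked[i])

# Action 2 has the best raw score, but it is masked, so action 0 wins.
print(select_action([1.5, 0.2, 3.0], invalid_actions={2}))  # → 0
```

In a trainer this mask would typically be applied to the policy's logits before sampling, which is the same idea: masked actions receive zero probability mass.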

Salesforce open sources Centrifuge: A library for accelerating JVM restarts

Amrata Joshi
02 Nov 2018
3 min read
Yesterday, Paymon Teyer, a principal member of technical staff at Salesforce, introduced Centrifuge, a library and framework for scheduling and running startup and warmup tasks. It focuses mainly on accelerating JVM restarts, and it provides an interface for implementing warmup tasks such as calling an HTTP endpoint, populating caches, and handling pre-compilation tasks for generated code.

When the JVM restarts in a production environment, server performance suffers: the JVM has to reload classes, trigger reflection inflation, rerun its JIT compiler on hot code paths, reinitialize objects and dependency injections, and repopulate component caches. The performance impact of JVM restarts can be minimized by allowing individual components to execute arbitrary warmup logic themselves after a cold start. Centrifuge was created to execute these warmup tasks while managing resource usage and handling failures. It allows users to register and configure warmup tasks either declaratively or programmatically, and it schedules tasks, manages and monitors threads, handles exceptions and retries, and provides status reports.

Centrifuge supports the following two categories of warmup tasks:

Blocking tasks
Blocking tasks prevent the application from returning to the available server pool until they complete. These tasks must be executed for the application to function properly, for example executing source code generators or populating a cache from storage to meet SLA requirements.

Non-blocking tasks
Non-blocking tasks execute asynchronously and don't interfere with the application's readiness. These tasks do work that is needed after an application restarts but is not required immediately for the application to be in a consistent state. Examples include warmup logic that triggers JIT compilation on code paths, or eagerly triggering dependency injection and object creation.

How to use Centrifuge
- Include a Maven dependency for Centrifuge in the POM.
- Implement the Warmer interface for each warmup task. The warmer class should have an accessible default constructor and should not swallow InterruptedException.
- Register the warmers either programmatically in code or declaratively in a configuration file. To add and remove warmers without recompiling, register them declaratively and load the configuration file into Centrifuge.

How is the HTTP warmer useful?
Centrifuge provides a simple HTTP warmer which calls HTTP endpoints to trigger the code paths exercised by the resources implementing those endpoints. If an application provides a homepage URL which, when called, connects to a database, populates caches, and so on, the HTTP warmer can warm those code paths.

Read more about Centrifuge on Salesforce's official website.

About Java Virtual Machine – JVM Languages
Tuning Solr JVM and Container
Concurrency programming 101: Why do programmers hang by a thread?
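Centrifuge itself is a Java library, but the blocking/non-blocking split it describes can be sketched in a few lines of library-free Python (names here are illustrative, not Centrifuge's actual API): blocking warmers run to completion before the app reports ready, while non-blocking warmers run in background threads.

```python
import threading

class Warmer:
    """Illustrative warmup task: subclasses override warm()."""
    def warm(self):
        raise NotImplementedError

def run_warmup(blocking, non_blocking):
    """Run blocking warmers to completion, start non-blocking ones async.

    Returns the background threads so the caller can join them later.
    """
    for warmer in blocking:          # must finish before serving traffic
        warmer.warm()
    threads = []
    for warmer in non_blocking:      # runs after readiness, asynchronously
        t = threading.Thread(target=warmer.warm, daemon=True)
        t.start()
        threads.append(t)
    return threads

class CacheWarmer(Warmer):
    def __init__(self):
        self.done = False
    def warm(self):
        self.done = True

cache, jit = CacheWarmer(), CacheWarmer()
background = run_warmup(blocking=[cache], non_blocking=[jit])
# cache.done is guaranteed True here; jit warms up in the background
```

A real implementation would add what the article credits to Centrifuge: retries, exception handling, thread monitoring, and status reporting.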

Microsoft is going to acquire GitHub

Richard Gall
04 Jun 2018
2 min read
In one of the most interesting developments in tech for some time (and that's saying something), Bloomberg is reporting that Microsoft has agreed to acquire GitHub. Spokespeople from Microsoft and GitHub declined to comment when asked by Bloomberg, but the deal could be announced later today. With 24 million users on the platform, this move could well have an impact across the software world. However, while it may seem surprising, it isn't perhaps quite as shocking as it first appears. Microsoft has embraced open source in the last few years; the company is one of the top contributors to GitHub, according to The Verge.

When were rumors of Microsoft's intention to buy GitHub first reported?
Reports of Microsoft's intention to acquire GitHub first appeared in Business Insider just a few days ago, at the beginning of June 2018. According to the website, sources 'close to both companies' said that serious talks had been happening for the past few months. Informal discussions on the issue have taken place over the last few years; it's only now that they have become more serious. With GitHub's CEO Chris Wanstrath set to leave in August, it makes sense for Microsoft to take the opportunity to make a move now.

Why would Microsoft want to acquire GitHub?
Microsoft has been playing catch-up with the open source revolution, and its attitude towards open source has changed significantly in recent years. It has open sourced a growing number of its tools, including PowerShell, Visual Studio Code, and .NET. Back in 2001, former Microsoft CEO Steve Ballmer called Linux a "cancer" (he later retracted the statement). Today, under Satya Nadella, it's a completely different story. For that reason, the acquisition of GitHub represents an important step in the evolution of Microsoft's relationship with the open source world.

There are still questions around how much Microsoft is really committed to open source. To cynics, embracing open source is as much about business as values, and billion-dollar acquisitions don't exactly scream 'free and open software'. However, it is still early days; how the acquisition unfolds, and how it is received by the developer community, will be interesting to watch. Whatever you think of Microsoft's move, GitHub isn't exactly thriving from a business perspective: it lost $66 million in three quarters of 2016.

Read next
10 years of GitHub
Microsoft releases Windows 10 Insider build 17682!
Epicor partners with Microsoft Azure to adopt Cloud ERP

Amazon unveils Sagemaker: An end-to-end machine learning service

Sugandha Lahoti
01 Dec 2017
3 min read
Machine learning was one of the most talked-about topics at Amazon re:Invent this year. In order to make machine learning models accessible to everyday users, regardless of their expertise level, Amazon Web Services launched an end-to-end machine learning service: SageMaker. Amazon SageMaker allows data scientists, developers, and machine learning experts to quickly build, train, and deploy machine learning models at scale. The image below shows the process SageMaker adopts to aid developers in building ML models.

Source: aws.amazon.com

Model building
Amazon SageMaker makes it easy to build ML models through straightforward training and selection of the best algorithms and frameworks for a particular model. SageMaker provides zero-setup hosted Jupyter notebooks which make it easy to explore, connect to, and visualize training data stored on Amazon S3. These notebooks can run on either general instance types or GPU-powered instances.

Model training
ML models can be trained with a single click in the Amazon SageMaker console. For training, SageMaker can also move data from Amazon RDS, Amazon DynamoDB, and Amazon Redshift into S3. Amazon SageMaker is preconfigured to run TensorFlow and Apache MXNet; however, developers can use their own frameworks and create their own training jobs with Docker containers.

Model tuning and hosting
Amazon SageMaker has a model hosting service with HTTPS endpoints. These endpoints can serve real-time inferences, scale to support traffic, and simultaneously allow A/B testing. SageMaker can automatically tune models to achieve high accuracy, which makes the training process faster and easier. It manages the underlying infrastructure and allows developers to easily scale to train models at petabyte scale.

Model deployment
After training and tuning comes the deployment phase. SageMaker deploys models on an auto-scaling cluster of Amazon EC2 instances spread across multiple availability zones, for running predictions on new data.

According to the official product page, Amazon SageMaker has multiple use cases. One of them is ad targeting, where SageMaker can be used with other AWS services to help build, train, and deploy ML models for targeting online ads, optimizing return on ad spend, customer segmentation, and more. Another interesting use case is training recommender systems within its serverless, distributed environment, which can then be hosted on low-latency, auto-scaling endpoints. SageMaker can also be used to build highly efficient industrial IoT and ML models to predict machine failure or to schedule maintenance.

As of now, Amazon SageMaker is free for developers for the first two months. Each month, developers are provided with 250 hours of t2.medium notebook usage, 50 hours of m4.xlarge usage for training, and 125 hours of m4.xlarge usage for hosting. After the free period, pricing varies by region, and customers are billed per second of instance usage, per GB of storage, and per GB of data transferred into and out of the service.

AWS SageMaker provides an end-to-end solution for the development of machine learning applications. The ease and flexibility it offers can be harnessed by developers to solve several business-related problems.
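The A/B testing behavior described above amounts to weighted traffic splitting between model variants behind one endpoint. A minimal, library-free sketch of that routing idea (the variant names and weights are invented for illustration; this is not the SageMaker API):

```python
import random

def pick_variant(variants, rng=random):
    """Route one request to a model variant by traffic weight.

    variants: dict mapping variant name -> traffic weight
              (weights need not sum to 1).
    """
    names = list(variants)
    weights = [variants[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Send roughly 90% of traffic to the current model, 10% to the challenger.
counts = {"model-a": 0, "model-b": 0}
for _ in range(1000):
    counts[pick_variant({"model-a": 9, "model-b": 1})] += 1
```

Comparing metrics between the two buckets over time is what lets you decide whether the challenger model should take over.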

Deeplearning4j 1.0.0-alpha arrives!

Sunith Shetty
09 Apr 2018
4 min read
The Skymind team has announced a milestone release of Eclipse Deeplearning4j (DL4J), an open-source library for deep learning. DL4J 1.0.0-alpha brings breakthrough changes which will ease the development of deep learning applications using Java and Scala. From a developer's perspective, the roadmap provides an exciting opportunity to perform complex numerical computations, with major updates to each module of Deeplearning4j.

DL4J is a distributed neural network library in Java and Scala which allows distributed training on Hadoop and Spark. It provides powerful data processing that enables efficient use of CPUs and GPUs. With new features, bug fixes, and optimizations in the toolkit, Deeplearning4j provides excellent capabilities for advanced deep learning tasks. Here are some of the significant changes in DL4J 1.0.0-alpha:

Deeplearning4j: New changes made to the framework
- Enhanced and new layers added to the DL4J toolkit.
- Many API changes to optimize training, building, and deploying neural network models in production environments.
- A considerable number of bug fixes and optimizations.

Keras 2 import support
You can now import Keras 2 models into DL4J, while keeping backward compatibility for Keras 1. The older DL4J-keras module and Model API from DL4J 0.9.1 have been removed; the only entry point for importing Keras models is now KerasModelImport. Refer to the DL4J Keras import documentation for the complete list of updates.

ND4J: New features
A powerful library for scientific and numerical computing on the JVM:
- Hundreds of new operations and features added to ease scientific computing, an essential building block for deep learning tasks.
- Added NVIDIA CUDA support for 9.0/9.1. Support for CUDA 8.0 continues, while support for CUDA 7.5 has been dropped.
- New API changes in the ND4J library.

ND4J: SameDiff
There is a new alpha release of SameDiff, an auto-differentiation engine for ND4J. It supports two execution modes for serialized graphs: Java-driven execution and native execution. It also supports import of TensorFlow and ONNX graphs for inference purposes. You can find all the other new features in the SameDiff release notes.

DataVec: New features
An effective ETL library for getting data into the pipeline so neural networks can understand it:
- New features and bug fixes for efficient and powerful ETL processes.
- New API changes incorporated in the DataVec library.

Arbiter: New features
A package for efficient optimization of neural networks to obtain good performance:
- New workspace support for hyperparameter optimization of machine learning models.
- New layers and API changes.
- Bug fixes and improvements for tuning performance.
A complete list of changes is available in the Arbiter release notes.

RL4J: New features
A reinforcement learning framework integrated with Deeplearning4j for the JVM:
- Added support for LSTM layers in asynchronous advantage actor-critic (A3C) models.
- You can now use the latest version of VizDoom, since the MDP for Doom has been updated.
- Lots of fixes and improvements in the RL4J framework.

ScalNet
A Scala wrapper for DL4J offering a Keras-like API for deep learning:
- A new ScalNet Scala API, very similar to the Keras API, has been released. It supports Keras-style sequential models.
- The project module closely resembles both the DL4J model-import module and Keras.
Refer to the ScalNet release notes if you would like to know more.

ND4S: N-Dimensional Arrays for Scala
Open-source Scala bindings for ND4J:
- ND4S now has Scala 2.12 support.

Possible issues with the DL4J 1.0.0-alpha release
- Since this is an alpha release, you may encounter performance-related and other issues compared to DL4J 0.9.1. These will be addressed and rectified in the next release.
- Support for training a Keras model in DL4J is still very limited; this will be handled in the next release. To know more, refer to the Keras import bug report.
- Major new operations added in ND4J do not use the GPU yet. The same applies to the new auto-differentiation engine.

We can expect more improvements and new features on the DL4J 1.0.0 roadmap. For the full list of updates, refer to the release notes.

Check out other popular posts:
Top 6 Java Machine Learning/Deep Learning frameworks you can't miss
Top 10 Deep learning frameworks
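The core job of an auto-differentiation engine like SameDiff — computing derivatives of a program automatically rather than symbolically or numerically — can be illustrated with a tiny library-free sketch using forward-mode dual numbers. This is a generic illustration of the technique, not SameDiff's actual API:

```python
class Dual:
    """A value paired with its derivative, for forward-mode autodiff."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def derivative(f, x):
    """Derivative of f at x, computed automatically by running f."""
    return f(Dual(x, 1.0)).deriv

# d/dx (x*x + x) at x = 3 is 2*3 + 1 = 7
print(derivative(lambda x: x * x + x, 3.0))  # → 7.0
```

Graph-based engines such as SameDiff typically use reverse mode (backpropagation) over serialized graphs instead, but the principle is the same: derivative rules are applied operation by operation as the computation runs.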

Firefox 60 arrives with exciting updates for web developers: Quantum CSS engine, new Web APIs and more

Sugandha Lahoti
15 May 2018
2 min read
Today, web developers are greeted with a new update to the popular Firefox web browser. Firefox 60 hosts a variety of feature additions and updates targeted specifically at the web developer community.

Quantum CSS for Android available now
Firefox has brought its new CSS engine, Quantum CSS (previously known as Stylo), to Firefox for Android. The engine takes advantage of modern hardware, parallelizing work across all of the cores in your machine and running up to almost 18 times faster.

New Web APIs
Two new Web APIs have been added. The Web Authentication API has been enabled, which allows USB tokens for website authentication. The WebVR API has been enabled by default on macOS; it provides support for exposing virtual reality devices to web apps. Firefox 60 also brings a new policy engine and Group Policy support for customized enterprise deployments, using Windows Group Policy or a cross-platform JSON file.

Changes in JavaScript
- ECMAScript 2015 modules have been enabled by default.
- The Array.prototype.values() method has been added again; it was disabled in earlier versions due to compatibility issues.

Changes in CSS
- The align-content, align-items, align-self, justify-content, and place-content property values have been updated per the latest CSS Box Alignment Module Level 3 spec.
- The paint-order property has been implemented.

Changes in Developer Tools
- In the CSS pane rules view, the keyboard shortcuts for precise value increments (increase/decrease by 0.1) have changed from Alt + Up/Down to Ctrl + Up/Down, and CSS variable names now auto-complete.
- In Responsive Design Mode, a "Reload when..." dropdown has been added to let users enable or disable automatic page reloads when touch simulation is toggled or the simulated user agent is changed.

Changes in DOM
- The dom.workers.enabled pref has been removed, meaning workers can no longer be disabled.
- PerformanceResourceTiming is now available in workers.
- The PerformanceObserver.takeRecords() method has been implemented.
- The Animation.updatePlaybackRate() method has been implemented.
- The Gecko-only options object storage option of the IDBFactory.open() method has been deprecated.
- Promises can now be used within IndexedDB code.

The entire list of developer-centric changes is available on the Mozilla Developer page. You can also file a bug in Bugzilla or see the system requirements of this release.

Get ready for Bootstrap v4.1; Web developers to strap up their boots
npm v6 is out!
What's new in ECMAScript 2018 (ES9)?

NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018

Bhagyashree R
04 Sep 2018
5 min read
Last week, James Carter and Stephen Smalley presented the architecture and security mechanisms of two operating systems, Zephyr and Fuchsia, at the Linux Security Summit 2018. James and Stephen are computer security researchers in the Information Assurance Research organization of the US National Security Agency (NSA). They discussed current concerns in these operating systems and contributions, by themselves and others, to further advance the security of these emerging open source operating systems. They also compared the security features of Zephyr and Fuchsia to Linux and Linux-based systems such as Android.

Zephyr
Zephyr is a scalable real-time operating system (RTOS) for IoT devices, supporting multiple architectures with security as a main focus. It targets resource-constrained devices, seeking to be a new "Linux" for little devices.

Protection mechanisms in Zephyr
Zephyr introduced basic hardware-enforced memory protections in the v1.8 release, and these were officially supported in v1.9. To support these protections, the microcontroller must have either a memory protection unit (MPU) or a memory management unit (MMU). The mechanisms provide protection in the following ways:
- They enforce Read Only/No Execute (RO/NX) restrictions to protect read-only data from tampering.
- They provide runtime support for stack depth overflow protections.
The researchers' contribution was to review the basic memory protections and to develop a set of kernel memory protection tests, modeled after a subset of the lkdtm tests in Linux from KSPP. These tests were able to detect bugs and regressions in Zephyr MPU drivers and are now part of the standard regression testing that Zephyr performs on all future changes.

Userspace support in Zephyr
In previous versions everything ran in supervisor mode, so Zephyr introduced userspace support in v1.10 and v1.11. This requires the basic memory protection support and an MPU/MMU, and it provides basic support for user-mode threads with isolated memory. The researchers' contribution here was to develop userspace tests that verify some of the security-relevant properties of user-mode threads, confirm the correctness of the x86 implementation, and validate the initial ARM and ARC userspace implementations.

App Shared Memory: a new feature contributed by the researchers
Originally, Zephyr gave all user threads access to the global variables of all applications. This imposed a high burden on application developers to:
- Manually organize the application's global variable memory layout to meet (MPU-specific) size/alignment restrictions.
- Manually define and assign memory partitions and domains.
To solve this problem, the researchers developed a new feature, App Shared Memory, which will come out in the v1.13 release. It:
- Is a more developer-friendly way of grouping application globals based on desired protections.
- Automatically generates the linker script, section markings, and memory partition/domain structures.
- Provides helpers to ease application coding.

Fuchsia
Fuchsia is an open source microkernel-based operating system, primarily developed by Google. It is based on a new microkernel called Zircon and targets modern hardware such as phones and laptops.

Security mechanisms in Fuchsia

Microkernel security primitives
Regular handles: Userspace accesses kernel objects through handles. A handle identifies both an object and a set of access rights to that object. With the proper rights, one can duplicate objects, pass them across IPC, and obtain handles to child objects. Some of the concerns pointed out for regular handles:
- If you have a handle to a job, you can get a handle to anything in the job using object_get_child().
- Leak of the root job handle.
- Refining default rights down to least privilege.
- Not all operations check access rights.
- Some rights are currently unimplemented.

Resource handles: A variant of handles for platform resources such as memory-mapped I/O, I/O ports, IRQs, and hypervisor guests. Concerns:
- Coarse granularity of root resource checks.
- Leak of the root resource handle.
- Refining the root resource down to least privilege.

Job policy: In Fuchsia, every process is part of a job, and jobs can have child jobs. A job policy is applied to all processes within the job; policies cover error-handling behavior, object creation, and the mapping of write-execute (WX) memory. Concerns:
- Write-execute (WX) policy is not yet implemented.
- The mechanism is inflexible.
- Refining job policies down to least privilege.

vDSO (virtual dynamic shared object) enforcement: The vDSO is the only way to invoke system calls and is fully read-only. Concerns:
- Potential for tampering with or bypassing the vDSO; for example, process_write_memory() allows you to overwrite the vDSO.
- Limited flexibility, for example compared to seccomp.

Userspace mechanisms
Namespaces: A namespace is a collection of objects that you can enumerate and access.
Sandboxing: A sandbox is the configuration of a process's namespace, created based on its manifest. Concerns for namespaces and sandboxing:
- Sandboxing applies only to application packages, not system services.
- Namespace and sandbox granularity.
- No independent validation of the sandbox configuration.
- Current use of global /data and /tmp.

To address the aforementioned concerns, the researchers suggested a MAC (mandatory access control) framework. It could:
- Support finer-grained resource checks.
- Validate namespace/sandbox configurations.
- Help control propagation, support revocation, and apply least privilege.
- Just as in Android, provide a unified framework for defining, enforcing, and validating security goals for Fuchsia.

This was a sneak peek of the talk. To know more about the architecture, hardware limitations, and security features of Zephyr and Fuchsia in detail, watch the presentation on YouTube: Security in Zephyr and Fuchsia - Stephen Smalley & James Carter, National Security Agency.

Cryptojacking is a growing cybersecurity threat, report warns
Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
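The capability-handle model discussed in the talk — an object reference bundled with a set of rights, where duplication can only narrow those rights — can be sketched in a few lines of library-free Python. This is a generic illustration of the idea, not Zircon's actual API:

```python
class Handle:
    """An object reference plus the rights the holder has over it."""
    def __init__(self, obj, rights):
        self.obj = obj
        self.rights = frozenset(rights)

    def duplicate(self, rights):
        """Make a new handle with a (possibly narrower) subset of rights."""
        requested = frozenset(rights)
        if not requested <= self.rights:
            raise PermissionError("cannot duplicate with rights you don't hold")
        return Handle(self.obj, requested)

    def check(self, right):
        """Gate an operation on the handle holding the required right."""
        if right not in self.rights:
            raise PermissionError(f"missing right: {right}")

root = Handle("job-0", {"read", "write", "duplicate"})
reader = root.duplicate({"read"})   # least privilege: drop write/duplicate
reader.check("read")                # allowed
# reader.check("write") would raise PermissionError
```

The concerns the researchers raise map directly onto this model: a leaked root handle carries all rights, and "not all operations check access rights" means some code paths skip the check() step entirely.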