
Tech News

Confluent, an Apache Kafka service provider, adopts a new license to fight against cloud service providers

Natasha Mathur
26 Dec 2018
4 min read
Software firms are increasingly tightening their licenses to prevent cloud service providers from exploiting their open source code. One such firm is Confluent, an Apache Kafka service provider, which announced its new Confluent Community License two weeks ago. The license allows users to download, modify, and redistribute the code, but does not let them offer the software as a service (SaaS). "What this means is that, for example, you can use KSQL however you see fit as an ingredient in your own products or services, whether those products are delivered as software or as SaaS, but you cannot create a KSQL-as-a-service offering. We'll still be doing all development out in the open and accepting pull requests and feature suggestions", says Jay Kreps, CEO, Confluent. The new license has no effect on Apache Kafka itself, which remains under the Apache 2.0 license, and Confluent will continue to contribute to it.

Kreps pointed out that leading cloud providers such as Amazon, Microsoft, Alibaba, and Google all approach open source differently. Some partner with open source companies and offer hosted versions of their software, while others take the open source code, fold it into their cloud offerings, and direct their investments into differentiated proprietary services. Michael Howard, CEO of MariaDB Corp., called Amazon's tactics "the worst behavior" he has seen in the software industry, enabled by a loophole in licensing, and said the cloud giant is "strip mining by exploiting the work of a community of developers who work for free", as first reported by Silicon Angle. One possible response would be for open source firms to "pull back" from their open source investments and build more proprietary software, but Kreps rejects that path: "But we think the right way to build fundamental infrastructure layers is with open code. As workloads move to the cloud we need a mechanism for preserving that freedom while also enabling a cycle of investment, and this is our motivation for the licensing change", he says.

Confluent's license change follows MongoDB's switch to the Server Side Public License (SSPL) this October, which was likewise intended to prevent major cloud providers from misusing its open source code. MongoDB changed its license because cloud vendors that bear none of the development cost "capture all the value" of the software while contributing little back to the community, and because many providers had started taking MongoDB's open-source code to offer hosted commercial versions of its database without following the open-source rules. The change creates "an incredible opportunity to foster a new wave of great open source server-side software", said Eliot Horowitz, CTO and co-founder of MongoDB, who added that he hopes it will "protect open source innovation". Before that, Redis Labs adopted the Commons Clause, an initiative by a group of software firms to protect their rights; the clause is added to existing open source licenses to produce a new, combined license that restricts commercial sale of the software. All of these efforts aim to ensure that open source communities are not taken advantage of by the leading cloud providers. As Kreps puts it, "We think this is a positive change and one that can help ensure small open source communities aren't acting as free and unsustainable R&D for tech giants that put sustaining resources only into their own differentiated proprietary offerings".

Related coverage:
Neo4j Enterprise Edition is now available under a commercial license
GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
Free Software Foundation updates their licensing materials, adds Commons Clause and Fraunhofer FDK AAC license

Ruby 2.6.0 released with a new JIT compiler

Prasad Ramesh
26 Dec 2018
2 min read
Ruby 2.6.0 was released yesterday and brings a new JIT compiler. The new version also adds the RubyVM::AbstractSyntaxTree module.

The new JIT compiler in Ruby 2.6.0: Ruby 2.6.0 ships with an early implementation of a Just-In-Time (JIT) compiler, introduced to improve the performance of Ruby programs. Unlike traditional JIT compilers, which operate in-process, Ruby's JIT compiler writes C code to disk and spawns a common C compiler to generate native code. To enable it, specify --jit either on the command line or in the $RUBYOPT environment variable; --jit-verbose=1 makes the JIT compiler print additional information. The JIT compiler works only when Ruby is built with GCC, Clang, or Microsoft Visual C++, and one of these compilers must also be available at runtime. On Optcarrot, a CPU-intensive benchmark, Ruby 2.6 is about 1.7x faster than Ruby 2.5. The JIT compiler is still experimental, however, and workloads like Rails might not benefit from it for now.

The RubyVM::AbstractSyntaxTree module: Ruby 2.6 adds the RubyVM::AbstractSyntaxTree module, though the team does not guarantee any future compatibility for it. The module has a parse method, which parses a given string as Ruby code and returns the Abstract Syntax Tree (AST) nodes, and a parse_file method, which opens and parses a given file as Ruby code and returns AST nodes. A RubyVM::AbstractSyntaxTree::Node class, another experimental feature, is also introduced in Ruby 2.6.0; developers can get the source location and children nodes from Node objects. To learn more about the other new features and improvements, see the Ruby 2.6.0 release notes.

Related coverage:
8 programming languages to learn in 2019
Clojure 1.10 released with Prepl, improved error reporting and Java compatibility
NumPy drops Python 2 support. Now you need Python 3.5 or later.

Italian researchers conduct an experiment to prove that quantum communication is possible on a global scale

Prasad Ramesh
26 Dec 2018
3 min read
Researchers from Italy have published a paper showing that quantum communication is feasible between high-orbiting satellites and a ground station. The research demonstrates that quantum communication is possible on a global scale using a Global Navigation Satellite System (GNSS). The results are presented in a paper published last week titled Towards quantum communication from global navigation satellite system.

In the experiment, a single photon was exchanged over a distance of 20,000 km between a ground station and a high-orbit satellite. The exchange took place between the retroreflector arrays mounted on Russian GLONASS satellites and the Italian Space Agency's Space Geodesy Centre on the ground. The challenge with high-orbit satellites is that the distance causes high diffraction losses in the channel. One of the co-authors, Dr. Giuseppe Vallone of the University of Padova, told IOP Publishing: "Satellite-based technologies enable a wide range of civil, scientific and military applications like communications, navigation and timing, remote sensing, meteorology, reconnaissance, search and rescue, space exploration and astronomy." He notes that the crux of such systems is safely transmitting information from satellites to the ground, and that these channels must be protected from interference by third parties. "Space quantum communications (QC) represents a promising way to guarantee unconditional security for satellite-to-ground and inter-satellite optical links, by using quantum information protocols as quantum key distribution (QKD)."

The quantum key distribution (QKD) protocols used in the experiment guarantee strong security for satellite-to-satellite and satellite-to-Earth links: data is encrypted using quantum mechanics, and any interference is detected quickly. Another co-author, Prof. Villoresi, explained to IOP Publishing why the team focused on high-orbit satellites despite the challenges: "The high orbital speed of low earth orbit (LEO) satellites is very effective for the global coverage but limits their visibility periods from a single ground station. On the contrary, using satellites at higher orbits can extend the communication time, reaching few hours in the case of GNSS."

After the experiments, the researchers estimated the requirements for an active source on a GNSS satellite, aiming toward QC from GNSS with state-of-the-art technology. This does not mean faster internet or communication: only a single photon was transmitted in the experiment, so transferring large amounts of data quickly is not what this application offers. It does, however, show that data can be transmitted over a large distance through a secure channel. For more details, you can check out the research paper on the IOPscience website.

Related coverage:
The US to invest over $1B in quantum computing, President Trump signs a law
UK researchers build the world's first quantum compass to overthrow GPS
Quantum computing – Trick or treat?

Blender 2.8 beta released with a revamped user interface and a high-end viewport, among other features

Natasha Mathur
26 Dec 2018
2 min read
The Blender team released the 2.8 beta of Blender, its free and open-source 3D creation software, earlier this week. Blender 2.8 beta comes with new features and updates such as EEVEE, a high-end viewport, collections, Cycles improvements, and 2D animation, among others. Blender is a 3D creation suite that covers the entirety of the 3D pipeline, including modeling, rigging, animation, simulation, rendering, compositing, and motion tracking. It also supports video editing and game creation.

What's new in Blender 2.8 beta?

EEVEE: Blender 2.8 beta comes with EEVEE, a new physically based real-time renderer. EEVEE works both as a renderer for final frames and as the engine driving Blender's real-time viewport. It supports advanced features like volumetrics, screen-space reflections and refractions, subsurface scattering, soft and contact shadows, depth of field, camera motion blur, and bloom.

A new 3D viewport: the 3D viewport has been completely rewritten. It takes better advantage of modern graphics cards and adds powerful new features. It includes a workbench engine that helps visualize a scene in flexible ways, and EEVEE also powers the viewport to enable interactive modeling and painting with PBR materials.

2D animation: drawing capabilities have been improved with the new Grease Pencil, a powerful 2D animation system with a native 2D grease pencil object type, modifiers, and shader effects. In a nutshell, it gives 2D artists a user-friendly interface.

Collections: Blender 2.8 beta introduces "collections", a new concept that lets you organize your scene with the help of collections and view layers.

Cycles: Cycles gains new principled volume and hair shaders, bevel and ambient occlusion shaders, along with many other improvements and optimizations.

Other features include a rewritten dependency graph (the core object evaluation and computation system), which offers better performance on modern many-core CPUs and lays the groundwork for new features in future releases, and multi-object editing, which lets you enter edit modes for multiple objects together. For more information, check out the official Blender 2.8 beta release notes.

Related coverage:
Mozilla partners with Khronos Group to bring glTF format to Blender
Building VR objects in React V2 2.0: Getting started with polygons in Blender
Blender 2.5: Detailed Render of the Earth from Space
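For readers who script Blender, here is a minimal, hedged Python sketch of how two of these pieces, the EEVEE engine and the new collections system, surface in Blender 2.80's bundled bpy API. The collection name and the mesh-only filter are illustrative assumptions, and the script is meant to be run from Blender's Scripting workspace, not a standalone interpreter.

```python
# Minimal sketch, assuming Blender 2.80's bundled Python API (bpy).
# Run inside Blender (Scripting workspace); bpy is not a standalone package.
import bpy

# Use the new EEVEE real-time renderer for final frames.
bpy.context.scene.render.engine = 'BLENDER_EEVEE'

# Organize the scene with the new collections system.
props = bpy.data.collections.new("Props")            # create a collection (name is hypothetical)
bpy.context.scene.collection.children.link(props)    # attach it under the scene's master collection

# Link every mesh object in the scene into the "Props" collection as well.
for obj in bpy.context.scene.objects:
    if obj.type == 'MESH':
        props.objects.link(obj)
```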

Internal memo reveals NASA suffered a data breach compromising employees' social security numbers

Melisha Dsouza
26 Dec 2018
3 min read
On 18th December, Bob Gibbs, assistant administrator for the Office of Human Capital Management, sent an internal HR memo to all NASA employees alerting them to a possible compromise of agency servers in late October. The memo, shared by SpaceRef, states that the servers stored personally identifiable information about NASA employees, including their social security numbers. What is surprising is that NASA learned of the incident in October 2018 but remained silent until the memo was rolled out. Gibbs says in the memo that the space agency took immediate steps to contain the breach and that the investigation is still ongoing.

The scope of the breach is unclear. The memo states that NASA is "examining the servers to determine the scope of the potential data exfiltration and identify potentially affected individuals". The message was sent to all NASA employees, regardless of whether or not their information may have been compromised. NASA civil service employees who were on-boarded, separated from the agency, and/or transferred between centers from July 2006 to October 2018 may also have been affected.

NASA's Office of Inspector General (OIG) has continually criticized the agency's cybersecurity practices, reporting shortfalls in NASA's overall information technology (IT) management. The office stated in its latest semi-annual report, dated Oct. 31: "Through its audits, the OIG has identified systemic and recurring weaknesses in NASA's IT security program that adversely affect the Agency's ability to protect the information and information systems vital to its mission." In May, the OIG published an audit of NASA's Security Operations Center (SOC) and found several issues with the center, ranging from high management turnover to a lack of formal authority to manage information security issues for some parts of the agency. An October 2017 report stated that "Lingering confusion about security roles coupled with poor IT inventory practices continues to negatively impact NASA's security posture."

According to Hacker News, this is not the first time the agency's servers have been hacked. NASA suffered a massive security breach in 2016 in which 276 GB of sensitive data was released, including flight logs and credentials of thousands of its employees. All of these facts draw attention to the poor security practices followed at NASA. It will be interesting to see how NASA deals with this breach and what measures it takes to secure its systems against future cyber attacks. Head over to SpaceNews.com to learn more.

Related coverage:
Justice Department's indictment report claims Chinese hackers breached business and government networks
Former Senior VP's take on the Marriott data breach; NYT reports suspects Chinese hacking ties
Equifax data breach could have been "entirely preventable", says House oversight and government reform committee staff report

Facebook introduces a fully convolutional speech recognition approach and open sources wav2letter++ and flashlight

Bhagyashree R
24 Dec 2018
3 min read
Last week, the Facebook AI Research (FAIR) speech team introduced the first fully convolutional speech recognition approach. Additionally, they have open-sourced flashlight, a C++ library for machine learning, and wav2letter++, a fast and simple system for developing end-to-end speech recognizers.

Fully convolutional speech recognition approach: current state-of-the-art speech recognition systems are built on RNNs for acoustic or language modeling. Facebook's newly introduced system provides an alternative approach based solely on convolutional neural networks. It eliminates the feature extraction step altogether, as it is trained end-to-end to predict characters from the raw waveform, and it uses an external convolutional language model to decode words. The system (illustrated in an architecture diagram in Facebook's post; source: Facebook) is built from four parts:
- Learnable frontend: this section first applies a convolution of width 2 that emulates the pre-emphasis step, followed by a complex convolution of width 25 ms. After computing the squared absolute value, a low-pass filter and stride perform the decimation. The frontend finally applies a log-compression and a per-channel mean-variance normalization.
- Acoustic model: a CNN with gated linear units (GLUs), fed with the output of the learnable frontend. The acoustic models are trained to predict letters directly with the Auto Segmentation Criterion.
- Language model: the convolutional language model (LM) contains 14 convolutional residual blocks and uses GLUs as the activation function. It is used to score candidate transcriptions in addition to the acoustic model in the beam-search decoder.
- Beam-search decoder: generates word sequences given the output from the acoustic model.

Apart from this CNN-based approach, Facebook released the wav2letter++ and flashlight frameworks to complement it and enable reproducibility. flashlight is a standalone C++ library for machine learning. It uses the ArrayFire tensor library, features just-in-time compilation with modern C++, and targets both CPU and GPU backends for maximum efficiency and scale. The wav2letter++ toolkit is built on top of flashlight and written entirely in C++. It also uses ArrayFire as its primary library for tensor operations; ArrayFire is a highly optimized tensor library that can execute on multiple backends, including CUDA GPU and CPU backends. wav2letter++ supports multiple audio file formats such as wav and flac, and several feature types including raw audio, a linearly scaled power spectrum, log-Mels (MFSC), and MFCCs. To read more, check out Facebook's official announcement.

Related coverage:
Facebook halted its project 'Common Ground' after Joel Kaplan, VP, public policy, raised concerns over potential bias allegations
Facebook releases DeepFocus, an AI-powered rendering system to make virtual reality more real
The district of Columbia files a lawsuit against Facebook for the Cambridge Analytica scandal
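To make the gated convolutional acoustic model more concrete, here is a minimal, hedged PyTorch sketch of a stack of 1D convolutions with gated linear units (GLUs) that maps frontend features to per-frame letter scores. The layer sizes, channel counts, and the 40-letter output alphabet are illustrative assumptions, not the parameters reported by FAIR, and this is not the wav2letter++ implementation itself.

```python
# Minimal sketch of a GLU-gated convolutional acoustic model
# (illustrative sizes; not FAIR's actual wav2letter++ configuration).
import torch
import torch.nn as nn

class GLUConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        # Produce 2*out_ch channels; GLU gates one half with the other, halving them again.
        self.conv = nn.Conv1d(in_ch, 2 * out_ch, kernel_size, padding=kernel_size // 2)
        self.glu = nn.GLU(dim=1)

    def forward(self, x):            # x: (batch, channels, time)
        return self.glu(self.conv(x))

class ConvAcousticModel(nn.Module):
    def __init__(self, n_features=80, n_letters=40):
        super().__init__()
        self.blocks = nn.Sequential(
            GLUConvBlock(n_features, 256, kernel_size=13),
            GLUConvBlock(256, 256, kernel_size=13),
            GLUConvBlock(256, 512, kernel_size=13),
        )
        # Per-frame scores over the letter alphabet, later consumed by a beam-search decoder.
        self.output = nn.Conv1d(512, n_letters, kernel_size=1)

    def forward(self, features):     # features: (batch, n_features, time)
        return self.output(self.blocks(features))

# Example: 1 utterance, 80 frontend channels, 200 time frames.
scores = ConvAcousticModel()(torch.randn(1, 80, 200))
print(scores.shape)  # torch.Size([1, 40, 200])
```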

Mozilla launches a redesigned Mozilla Labs

Prasad Ramesh
24 Dec 2018
2 min read
Mozilla relaunched Mozilla Labs last week with a new look. The website showcases Mozilla's latest innovations and creations; it was launched on a new domain after the old one was no longer updated. Mozilla calls it a digital research laboratory: the team examines new technologies and tests what works and what doesn't. Some projects from Mozilla Labs become new Mozilla products, while others are explored further. The foundation is the Mozilla Manifesto and a commitment to a healthy internet.

Some of the items on Mozilla Labs right now:
- A WebXR Viewer for iOS, which gives users a preview of experiencing augmented reality (AR) from inside a web browser.
- The ability to create new virtual environments with Spoke and then share the experience with friends using Mozilla Hubs.
- Contributing to Common Voice, where Mozilla helps voice systems understand the voices of people from diverse backgrounds. It also puts voice data, which is normally expensive, into the hands of independent creators.
- Project Things, where a decentralized "Internet of Things" is being built with a focus on security, privacy, and interoperability.
- Firefox Reality, which can be installed to browse the immersive web completely in virtual reality.

These were some of the technologies Mozilla worked on in 2018. As they prepare for 2019, they will continue to innovate across platforms such as virtual reality, augmented reality, the Internet of Things, artificial intelligence, and many more. To know more, check out the Mozilla Labs website. You can also contribute to their projects on GitHub.

Related coverage:
Mozilla releases Firefox 64 and Firefox 65 beta
The State of Mozilla 2017 report focuses on internet health and user privacy
Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web

NVIDIA launches GeForce Now’s (GFN) 'recommended router' program to enhance the overall performance and experience of GFN

Natasha Mathur
24 Dec 2018
2 min read
NVIDIA launched a "recommended router" program last week to improve the overall experience of its GeForce NOW (GFN) cloud gaming service for PC and Mac. The GeForce NOW game-streaming service has transformed the user experience of playing high-performance games, and NVIDIA has now rolled out a few enhancements in beta to improve the quality of the service through this program.

The recommended router program covers the latest generation of routers built for cloud gaming in the home alongside video streaming and downloading. These routers let users configure settings so that GeForce NOW traffic is prioritized over other data. Recommended routers are certified as "factory-enabled" with a GeForce NOW "quality of service (QoS) profile" that ensures cloud game play runs at its best quality; the router settings load automatically once GeForce NOW launches. Network latency, the biggest drawback of cloud gaming, stays low with these routers, and they offer better streaming speeds for GeForce NOW.

"We're working closely with ASUS, D-LINK, Netgear, Razer, TP-Link, Ubiquiti Networks and other router manufacturers to build GeForce NOW recommended routers. They're committed to building best-in-class cloud gaming routers — just as we're committed to delivering best-in-class gaming experiences," says the NVIDIA team. GFN recommended routers are now available in the U.S. and Canada, starting with the AmpliFi HD Gamer's Edition by Ubiquiti Networks. AmpliFi uses multiple self-configuring radios and advanced antenna technology to deliver powerful, whole-home Wi-Fi coverage. For more information, read the official NVIDIA blog.

Related coverage:
NVIDIA demos a style-based generative adversarial network that can generate extremely realistic images; has ML community enthralled
NVIDIA makes its new "brain for autonomous AI machines", Jetson AGX Xavier Module, available for purchase
NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0

ACLU files lawsuit against 11 federal criminal and immigration enforcement agencies for disclosure of information on government hacking

Melisha Dsouza
24 Dec 2018
3 min read
On Friday, the American Civil Liberties Union (ACLU), Privacy International, and the University at Buffalo Law School's Civil Liberties & Transparency Clinic filed a Freedom of Information Act lawsuit against 11 federal criminal and immigration enforcement agencies, including the FBI, Immigration and Customs Enforcement, and the Drug Enforcement Administration. The lawsuit demands disclosure of basic information about government hacking: which hacking tools and methods the agencies use, how often those tools are used, the legal basis for employing them, and any internal rules that govern them, as well as any internal audits or investigations related to their use.

The ACLU states in its blog post that government hacking raises "grave privacy concerns", creating "surveillance possibilities" that could pose a security risk, because even "lawful hacking" can take advantage of unpatched vulnerabilities in a user's devices and software. The groups argue that by hacking into a phone, laptop, or another device, federal agents can obtain sensitive and confidential information: they can activate a device's camera and microphone, log keystrokes, or hijack a device's functions. Most of the time users are completely unaware that they are being surveilled, and there is little information on what constitutes "lawful hacking".

The ACLU argues that "Law enforcement use of hacking presents a unique threat to individual privacy." It supports this claim with examples. In one case, the government commandeered an internet hosting service to set up a "watering hole" attack suspected of spreading malware to many innocent people who visited websites on the server. In another case, an FBI agent investigating fake bomb threats impersonated an Associated Press reporter to deploy malware on a suspect's computer: the agent created a fake story and sent a link to it to a high school student, and when the student visited the website, it implanted malware on his computer that reported identifying information back to the FBI.

By shedding light on what the government is doing and what rules it follows, the lawsuit should help clarify whether and when the government should engage in hacking. It will also help users understand whether the government is collecting excessive information about the people it surveils, and how investigators handle innocent bystanders' information. You can head over to the ACLU's official blog to know more.

Related coverage:
IBM faces age discrimination lawsuit after laying off thousands of older workers, Bloomberg reports
Microsoft calls on governments to regulate Facial recognition tech now, before it is too late
British parliament publishes confidential Facebook documents that underscore the growth at any cost culture at Facebook

Qt for Python 5.12 released with PySide2, Qt GUI and more

Amrata Joshi
24 Dec 2018
4 min read
Last week, Qt introduced Qt for Python 5.12, the official set of Python bindings for Qt, intended to simplify the creation of innovative and immersive user interfaces for Python applications. With Qt for Python 5.12, it is possible to quickly visualize the massive amounts of data tied to Python development projects.

https://twitter.com/qtproject/status/1076003585979232256

Qt for Python 5.12 comes with a cross-platform environment for all development needs; Qt's user interface development framework features rich APIs and expansive graphics libraries. It gives developers a user-friendly platform and is fully supported by the Qt Professional Services team of development experts and practitioners, as well as Qt's global community. Lars Knoll, CTO of Qt, said, "Considering the huge data sets that Python developers work with on a daily basis, Qt's graphical capabilities makes it a perfect fit for the creation of immersive Python user interfaces. With Qt for Python 5.12, our customers can build those user interfaces faster and more easily than ever before – with the knowledge that they are backed by a global team of Qt and user interface experts."

Features of Qt for Python 5.12:
- PySide2: Qt's C++ framework, combined with the PySide2 Python module, offers a comprehensive set of bindings between Python and Qt.
- Qt GUI creation: Qt Graphical User Interface (GUI) creation consists of the following functional modules: Qt Widgets, a set of user interface elements for creating classic desktop-style user interfaces; Qt Quick, a standard library for writing QML applications, containing Quick Controls for creating fluid user interfaces; and Qt QML, a framework for developing applications and libraries with the QML language, a declarative language that describes user interfaces in terms of their visual components.
- Environment familiarity: Qt for Python 5.12 offers a familiar development environment for Python developers.
- PyPI: the Python Package Index (PyPI) makes installing Qt for Python 5.12 easy.
- VFX Reference Platform integration: Qt and Qt for Python 5.12 are integral parts of the VFX Reference Platform, a set of tool and library versions used for building software for the VFX industry.
- Qt 3D Animation: a set of prebuilt elements to help developers get started with Qt 3D.
- Qt Sql: a driver layer, SQL API layer, and a user interface layer for SQL databases.
- Qt TextToSpeech: an API for accessing text-to-speech engines.

Qt for Python 5.12 is available under commercial licensing, as part of the products Qt for Application Development and Qt for Device Creation, and as open source under the LGPLv3 license.

Development with Qt for Python 5.12 is meant to be fun, fast, and flexible: developers can power their UI development with ready-made widgets, controls, charts, and data visualizations, and create 2D/3D graphics for Python projects. Developers can also exchange ideas, learn, share, and connect with the Qt community, while Global Qt Services provide tailored support at every stage of the product development lifecycle.

Looking ahead, the Qt team may simplify the deployment of PySide2 applications, provide smoother interaction with other Python modules, and support other platforms such as embedded and mobile. Users are excited about the project and eagerly awaiting a stable release. Qt for Python should make developing desktop apps easier, but some users are sticking with PyQt5 since a stable release of Qt for Python hasn't rolled out yet, and the switch from PyQt to PySide might be difficult for many. To know more about Qt for Python 5.12, check out Qt's official website; a minimal PySide2 example is sketched after the related links below.

Related coverage:
Getting started with Qt Widgets in Android
Qt Design Studio 1.0 released with Qt photoshop bridge, timeline based animations and Qt live preview
Qt team releases Qt Creator 4.8.0 and Qt 5.12 LTS
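As a quick taste of the PySide2 bindings described above, the hedged sketch below builds a minimal one-window application. It assumes the PySide2 package installed from PyPI (pip install PySide2) and uses only the QtWidgets module; the window title and widget contents are illustrative.

```python
# Minimal PySide2 sketch: one window with a label and a button.
# Assumes: pip install PySide2 (the Qt for Python package on PyPI).
import sys
from PySide2.QtWidgets import (QApplication, QLabel, QPushButton,
                               QVBoxLayout, QWidget)

app = QApplication(sys.argv)

window = QWidget()
window.setWindowTitle("Qt for Python 5.12")

label = QLabel("Hello from PySide2")
button = QPushButton("Quit")
button.clicked.connect(app.quit)   # Qt signal/slot connection

layout = QVBoxLayout(window)       # lay out the widgets vertically
layout.addWidget(label)
layout.addWidget(button)

window.show()
sys.exit(app.exec_())              # start the Qt event loop
```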

Our healthcare data is not private anymore: Study reveals that machine learning can be used to re-identify individuals from physical activity data

Bhagyashree R
24 Dec 2018
3 min read
Last week, in a study published in JAMA Network Open, researchers revealed that machine learning algorithms trained on physical activity data collected from health tracking devices can be used to re-identify actual people. The study indicates that current practices for anonymizing health information are not sufficient.

Personal health and fitness data collected and stored by wearable fitness devices can potentially be sold to third parties, such as employers, insurance providers, and other companies, without the users' knowledge or consent. Health app makers might also be able to link users' names to their medical records and sell this information to third parties, and location information from activity trackers could be used to reveal sensitive military sites. There is therefore a need for a deidentification approach that aggregates the physical activity data of multiple individuals to ensure privacy for single individuals. For this study, the researchers analyzed the National Health and Nutrition Examination Survey (NHANES) 2003-2004 and 2005-2006 datasets, which included recordings from physical activity monitors, during both a training run and an actual study mode, for 4,720 adults and 2,427 children.

How does the re-identification procedure work? The model was constructed by building a separate multiclass classifier for each combination of demographic attributes, using two different machine learning algorithms for multiclass classification: a linear support vector machine and random forests. The models were then tested by feeding the demographic and physical activity data, but not the record numbers, from the testing data into the models to predict record numbers. The accuracy of the models was calculated by counting how many predicted record numbers matched the actual record numbers in the testing data. (A block diagram of the procedure appears in the paper; source: JAMA Network Open.)

Results: the random forest algorithm was able to re-identify the demographic and physical activity data of 4,478 adults (94.9%) and 2,120 children (87.4%) in NHANES 2003-2004 and 4,470 adults (93.8%) and 2,172 children (85.5%) in NHANES 2005-2006. The linear SVM re-identified 4,043 adults (85.6%) and 1,695 children (69.8%) in NHANES 2003-2004 and 4,041 adults (84.8%) and 1,705 children (67.2%) in NHANES 2005-2006.

How can privacy risks be reduced? Per the research paper, the privacy risks posed to individuals by sharing physical activity data can be reduced by aggregating data not only over time but also across individuals of largely different demographics. This is particularly important for programs such as NHANES that publicly release large national health datasets. There are also currently no strict regulations for organizations that collect and share such sensitive health data; policymakers should develop regulations to minimize the sharing of activity data by device manufacturers. You can go through the research paper for more details: Feasibility of Reidentifying Individuals in Large National Physical Activity Data Sets From Which Protected Health Information Has Been Removed With Use of Machine Learning.

Related coverage:
Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference
Researchers develop new brain-computer interface that lets paralyzed patients use tablets
Facebook AI researchers investigate how AI agents can develop their own conceptual shared language
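To illustrate the shape of the re-identification procedure described above, here is a hedged scikit-learn sketch: it trains a random forest to predict a per-person record number from activity features observed in one period, then measures how many held-out records from another period are matched back. The synthetic data, feature counts, and parameters are illustrative assumptions, not the study's actual NHANES pipeline.

```python
# Illustrative sketch of the re-identification idea (not the study's actual
# NHANES pipeline): predict a person's record number from aggregated
# physical activity features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_people, n_features = 200, 12

# Synthetic stand-in data: each person has a stable "activity signature"
# observed with noise in two different weeks (train week vs. test week).
signatures = rng.normal(size=(n_people, n_features))
train_X = signatures + 0.1 * rng.normal(size=(n_people, n_features))
test_X = signatures + 0.1 * rng.normal(size=(n_people, n_features))
record_numbers = np.arange(n_people)          # the labels we try to recover

# One multiclass classifier: the record number is the class to predict.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train_X, record_numbers)

# Re-identification rate: fraction of people whose later-week data is
# matched back to their own record number.
reid_rate = (clf.predict(test_X) == record_numbers).mean()
print(f"re-identified {reid_rate:.1%} of individuals")
```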

Introducing Netcap, a framework for secure and scalable network traffic analysis

Amrata Joshi
24 Dec 2018
5 min read
Last week, a new traffic analysis framework, Netcap (NETwork CAPture), was released. It converts a stream of network packets into accessible, type-safe structured data representing specific protocols or custom abstractions.

https://twitter.com/dreadcode/status/1076267396577533952

The project is implemented in the Go programming language, which provides a garbage-collected, memory-safe runtime, since parsing untrusted input can be dangerous. It was developed for a series of experiments (filtering, dataset labeling, encoding, error logging, and so on) in the thesis Implementation and evaluation of secure and scalable anomaly-based network intrusion detection. The Netcap project won second place at the Kaspersky Labs SecurIT Cup 2018 in Budapest.

Why was Netcap introduced? Corporate communication networks are frequently attacked with previously unseen malware or insider threats, which makes defense mechanisms such as anomaly-based intrusion detection systems necessary for detecting security incidents. Both signature-based and anomaly detection strategies rely on features extracted from network traffic, which requires secure and extensible collection strategies. Many available solutions are written in low-level systems programming languages that require manual memory management and suffer from vulnerabilities that allow a remote attacker to disable the network monitor; others lack flexibility and data availability. Netcap was released to tackle these problems and ease future experiments with anomaly-based detection techniques. Netcap uses Google's protocol buffers for encoding its output, which makes it accessible across a wide range of programming languages; the output can also be emitted as comma-separated values, a common input format for data analysis tools and systems. Netcap is extensible, provides multiple ways of adding support for new protocols, and implements the parsing logic in a memory-safe way. It provides high-dimensional data about observed traffic and lets researchers focus on new approaches for detecting malicious behavior in network environments rather than on data collection mechanisms and post-processing steps. It features a concurrent design that makes use of multi-core architectures, and the command-line tool focuses on usability and readability, displaying progress while processing packets.

Why Go? Go, commonly referred to as Golang, is a statically typed programming language released by Google in 2009. Netcap opted for Go because its syntax is similar to C, it adopts ideas from other languages such as Python and Erlang, and it is commonly used for network programming and backend implementations. Go compiles quickly and easily generates statically linked binaries. Goroutines, Go's lightweight concurrent routines, are multiplexed onto OS threads as required; if a goroutine blocks, the corresponding OS thread blocks as well, but other goroutines are unaffected, so one stalled routine does not disturb the rest of Netcap. Goroutines are also far cheaper than OS threads and allocate resources dynamically as needed, and Go's channels offer a lightweight way to communicate between goroutines, which simplifies synchronization and messaging in Netcap.

Design goals of Netcap:
- memory safety when parsing untrusted input
- ease of extension
- an output format interoperable with many different programming languages
- a concurrent design
- output with a small storage footprint on disk
- maximum data availability
- support for implementing custom abstractions
- rich platform and architecture support

Future scope: development on Netcap will focus on increasing unit test coverage and on performance-critical operations. The output of Netcap will be compared to other tools to ensure no data is missed or misinterpreted. Netcap will be extended with functionality such as support for extracted features, and the framework might be used for experiments on datasets for accurate predictions on network data. Encoding feature vectors could also be implemented as part of the framework, and an interface for adding additional application layer encoders could be added. Netcap will be evaluated for monitoring industrial control systems communication, and the recently open-sourced fingerprinting strategy for SSH handshakes (HASSH) by Salesforce could prove beneficial in the future. Check the slides from the presentation by Philipp Mieden (the creator of Netcap) at the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities on ResearchGate.

Many users appreciate the effort behind the project and are eagerly awaiting the features planned for future releases. A few Hacker News users, however, think the functionality provided by the tool is still unclear, that the thesis does not fully justify the tool as a whole, and that it remains unclear how anomalies would actually be detected. Many questions are still unanswered, but it will be interesting to see what Mieden comes up with next.

https://twitter.com/mythicalcmd/status/1076459582963310593

Related coverage:
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
Netflix adopts Spring Boot as its core Java framework
Facebook open-sources PyText, a PyTorch based NLP modeling framework

Facebook halted its project ‘Common Ground’ after Joel Kaplan, VP, public policy, raised concerns over potential bias allegations

Natasha Mathur
24 Dec 2018
3 min read
The Wall Street Journal published a report yesterday stating that Facebook halted a project named "Common Ground" late this summer over concerns that it could lead to accusations of political bias on the platform. Common Ground was developed to promote healthier political discussions among users with differing political beliefs.

The Common Ground project would reportedly have consisted of many different features aimed at reducing toxic content on the platform and encouraging more positive content around politics. These features included promoting news stories, status updates, and articles shared by people supporting opposite political beliefs, and removing comments and discussions that promote negativity or hate speech regarding politics. Facebook has already been taking measures to eradicate hate speech and misinformation on its platform; it published a "blueprint" last month that talks about updating its news feed algorithm.

Joel Kaplan, VP of global public policy at Facebook, raised concerns. Facebook had researched and discussed the project for well over a year before deciding to cancel it, and the project was terminated when Kaplan raised issues with it. Facebook, however, hasn't commented on the Common Ground project or on Kaplan's reported role in the decision to halt it. Kaplan's complaints were, first, that the name "Common Ground" itself sounds "patronizing", and second, that the project might draw criticism from conservative users. A spokeswoman for Facebook told the WSJ that Facebook considers it absolutely "essential" to understand diverse points of view when creating projects that are meant to "serve everyone". Kaplan also believed that this attempt to reduce polarization might, in turn, affect user engagement on Facebook, and he was not the only one: Mark Zuckerberg, Facebook's CEO, reportedly echoed Kaplan's concerns.

The WSJ also states that Kaplan's voice has grown stronger since the 2016 US presidential election, giving him a say in product-related decisions at Facebook. The report notes that although Kaplan promotes anti-bias positions, he has himself been part of recent controversies. For instance, Kaplan attended and sat in on the Congressional hearings for Brett Kavanaugh, a then-Supreme Court nominee accused of sexual misconduct by multiple women; Kaplan's attendance at the hearing led to widespread outrage among Facebook employees. Another example presented in the report is Kaplan's partnership with the "Daily Caller's fact-checking entity", which ended in November when "the Daily Caller's fact-checking operation lost its accreditation", reports the WSJ.

It is hard to say whether Facebook's decision to halt the project was a wise one, but the fact that Facebook is taking initiatives toward promoting healthier conversations on its platform is certainly notable. The story first appeared in The Wall Street Journal.

Related coverage:
NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
Ex-Facebook manager says Facebook has a "black people problem" and suggests ways to improve
UK parliament seizes Facebook internal documents cache after Zuckerberg's continuous refusal to answer questions

The US to invest over $1B in quantum computing, President Trump signs a law

Prasad Ramesh
24 Dec 2018
3 min read
US President Donald Trump has signed a bill called the National Quantum Initiative Act. This nation-wide quantum computing plan establishes goals for the next decade to accelerate the development of quantum technology.

What is the National Quantum Initiative Act about? The bill for quantum technologies was originally introduced in June this year. It commits agencies such as NIST, the NSF, and the Secretary of Energy to together provide $1.25B in funding from 2019 to 2023 to promote activities in quantum information science. The new act and the funding that comes with it will boost quantum research in the US. As stated in the Act: "The bill defines 'quantum information science' as the storage, transmission, manipulation, or measurement of information that is encoded in systems that can only be described by the laws of quantum physics." The president signed the bill into law last Friday.

What will the National Quantum Initiative Act allow? The bill aims to further the USA's position in quantum information science and its technology applications, and it supports research and development of quantum technologies that can lead to practical applications. It seeks to:
- expand the quantum computing workforce
- promote research opportunities across various academic levels
- address knowledge gaps
- add more facilities and centers for testing and education in this field
- promote the rapid development of quantum-based technologies

The bill also seeks to:
- improve collaboration between the US Federal Government, its laboratories, industry, and universities
- promote the development of international standards for quantum information science
- facilitate technology innovation and private sector commercialization
- meet the economic and security goals of the USA

The US President will work with Federal agencies, working groups, councils, subcommittees, and others to set goals for the National Quantum Initiative Act.

What's the fuss about quantum computing? As we mentioned in a previous post: "Quantum computing uses quantum mechanics in quantum computers to solve a diverse set of complex problems. It uses qubits to store information in parallel dimensions. Quantum computers can work through a solution involving large parameters with far fewer operations than a standard computer." This does not mean that a quantum computer is necessarily faster than a classical computer; it is simply better at solving complex problems that a regular computer would take far too long to solve, if it could solve them at all. Quantum computers have great potential for future problems and are hence drawing attention from tech companies and governments, with D-Wave launching a quantum cloud service, UK researchers working on quantum entanglement, and Rigetti working on a 128-qubit chip.

What are people saying? On the motivation for quantum computing, one Reddit comment puts it nicely: "Make no mistake, this is not only about advancing computing power, but this is also about maintaining cryptographic dominance. Quantum computers will be able to break a lot of today's encryption." Another comment quips: "Makes sense, Trump has a tendency to be in 2 different states simultaneously." You can read the bill in its entirety on the Congress Government website.

Related coverage:
Quantum computing – Trick or treat?
Rigetti Computing launches the first Quantum Cloud Services to bring quantum computing to businesses
Did quantum computing just take a quantum leap? A two-qubit chip by UK researchers makes controlled quantum entanglements possible

Congress passes ‘OPEN Government Data Act’ to make open data part of the US Code

Melisha Dsouza
24 Dec 2018
3 min read
December 22nd marked a win for the U.S. government in terms of the efficiency, accountability, and transparency of open data. Following the Senate vote held on 19th December, Congress passed the Foundations for Evidence-Based Policymaking (FEBP) Act (H.R. 4174, S. 2046). Title II of this package is the Open, Public, Electronic and Necessary (OPEN) Government Data Act, which requires all non-sensitive government data to be made available in open and machine-readable formats by default. The federal government possesses a huge amount of public data which should ideally be used to improve government services and promote private sector innovation. According to the Data Coalition, "the open data proposal will mandate that federal agencies publish their information online, using machine-readable data formats".

What does the bill mandate? There are a number of practical things the bill will do, with real benefits for both citizens and federal organizations. It:
- makes Federal data more accessible to the public and requires all agencies to publish an inventory of all their "data assets"
- encourages government organizations to use data to make decisions
- ensures better data governance by requiring Chief Data Officers in Federal agencies

After some minor corrections made on Saturday, December 22nd, the Senate passed the resolution required to send the bill onwards to the president's desk. Two things were amended in the act before it was passed on to the president: the text was amended so that it only applies to CFO Act agencies, not the Federal Reserve or smaller agencies, and a carve-out was added "for data that does not concern monetary policy", which relates to the Federal Reserve, among others.

Why is the open data proposal required? For many years, businesses, journalists, academics, civil society groups, and even other government agencies have relied on data that the federal government makes freely available in open formats online. However, while many federal government agencies publish open data, there has never been a law mandating the federal government to do so. Data that is available in a machine-readable format and catalogued online will help individuals, organizations, and other government offices use it while addressing privacy and national security concerns. Open data has been an effective platform for innovation in the public sector, supporting significant economic value while increasing transparency, efficiency, and accountability in government operations, and powering new tools and services that address some of the country's most pressing economic and social challenges.

Michele Jolin, CEO and co-founder of Results for America, said in a statement: "We commend Speaker Ryan, Senator Murray and their bipartisan colleagues in both chambers for advancing legislation that will help build evidence about the federally-funded practices, policies and programs that deliver the best outcomes. By ensuring that each federal agency has an evaluation officer, an evaluation policy and evidence-building plans, we can maximize the impact of public investments." U.S. citizens also called the bill a big "milestone" in the country's history and welcomed the news enthusiastically.

https://twitter.com/internetrebecca/status/1076226160751726592
https://twitter.com/Jay_Nath/status/1076884756426457088

You can read the entire backstory on what's in the bill and how it was passed at E Pluribus Unum.

Related coverage:
Equifax data breach could have been "entirely preventable", says House oversight and government reform committee staff report
Consumer protection organizations submit a new data protection framework to the Senate Commerce Committee
Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act