
Tech News


To bring focus on the impact of tech on society, an education in humanities is just as important as STEM for budding engineers, says Mozilla co-founder

Natasha Mathur
15 Oct 2018
4 min read
Mitchell Baker, chairwoman and co-founder of Mozilla, spoke last week about the need for the tech industry to expand beyond technical skills, following the announcement of the Responsible Computer Science Challenge. She argued that hiring employees only from the STEM (science, technology, engineering, and maths) stream produces technologists with the same “blind spots” as the current generation.

“STEM is a necessity and educating more people in STEM topics clearly critical. But one thing that’s happened in 2018 is that we’ve looked at the platforms, and the thinking behind the platforms, and the lack of focus on impact or result,” said Baker in a statement to the Guardian. She also said that hiring employees solely from the STEM disciplines is a move that will “come back to bite us”. Baker also tweeted about the reasons to move beyond narrowly technical jobs and skills: https://twitter.com/MitchellBaker/status/1050842658724184065

Mozilla wants to broaden the horizon of the tech industry by incorporating education grounded in the humanities, such as psychology and philosophy, into undergraduate computer science degrees. The ethics coursework is not meant to be purely philosophical: it will use hypotheses and logic to present its ideas, and those ideas should make sense within a computer science curriculum.

“We need to be adding not just social sciences of the past, but something related to humanity and how to think about the effects of technology on humanity – which is partly sociology, partly anthropology, partly psychology, partly philosophy, partly ethics … it’s some new formulation of all of those things, as part of a Stem education. Otherwise, we’ll have ourselves to blame, for generations of technologists who don’t even have the toolsets to add these things in,” said Baker.

The Mozilla Foundation, along with Omidyar Network, Schmidt Futures, and Craig Newmark Philanthropies, launched the Responsible Computer Science Challenge last week for professors and educators. It aims to produce “a new wave of engineers” who apply a holistic approach to the design of all types of tech products.

“The hope is that the Challenge will unearth and spark innovative coursework that will not only be implemented at the participating home institutions but also be scaled to additional colleges and universities across the country — and beyond,” reads the challenge overview. The challenge stems from the ongoing problem of misinformation online and aims to empower graduating engineers to drive a “culture shift in the tech industry and build a healthier internet”.

This initiative to promote ethics and humanities in computer science coursework reflects the values Mozilla stands by. Only last week, the company dropped the word “meritocracy” from its revised governance statement and leadership structure to actively promote diversity and inclusion.

“In a world where software is entwined with much of our lives, it is not enough to simply know what software can do. We must also know what software should and shouldn’t do, and train ourselves to think critically about how our code can be used. Students of computer science...must understand how code intersects with human behavior, privacy, safety, vulnerability, equality, and many other factors,” says Kathy Pham, a computer scientist at Mozilla who is co-leading the challenge.
For more information, check out the official Mozilla blog.

Mozilla, Internet Society, and web foundation wants G20 to address “techlash” fuelled by security and privacy concerns

Mozilla’s new Firefox DNS security updates spark privacy hue and cry

Firefox Nightly browser: Debugging your app is now fun with Mozilla’s new ‘time travel’ feature


GNU Guile 2.9.1 beta released with JIT native code generation to speed up all Guile programs

Prasad Ramesh
15 Oct 2018
2 min read
GNU released Guile 2.9.1 beta of the extension language for the GNU project. It is the first pre-release leading up to the 3.0 release series. In comparison to the current stable series, 2.2.x, Guile 2.9.1 brings support for just-in-time native code generation to speed up all Guile programs.

Just-in-time code generation in Guile 2.9

Relative to Guile 2.2, Guile programs now run up to 4 times faster, thanks to just-in-time (JIT) native code generation. JIT compilation is enabled automatically in this release. To disable it, configure Guile with either `--enable-jit=no' or `--disable-jit'. The default is `--enable-jit=auto', which enables the JIT. JIT support is currently limited to x86-64 platforms; eventually it will expand to all architectures supported by GNU Lightning. Users on other platforms can try passing `--enable-jit=yes' to see if JIT is available for them.

Lower-level bytecode

Relative to the virtual machine in Guile 2.2, Guile's VM instruction set is now more low-level. This allows expressing advanced optimizations, like type check elision or integer devirtualization, and makes JIT code generation easier. Because of this change, a given function may compile to more instructions in Guile 3.0 than in Guile 2.2, which can lead to slowdowns when the function is interpreted.

GOOPS classes are not redefinable by default

Previously, all GOOPS classes were redefinable, in theory if not in practice. This was supported by an extra indirection in all "struct" instances. Since only a subset of structs needs redefinition, the indirection has been removed to speed up Guile records. This also allows immutable Guile records to eventually be described by classes, and enables some optimizations in core GOOPS classes that shouldn't be redefined. GOOPS now has both redefinable and non-redefinable classes, and classes created with GOOPS are not redefinable by default. To make a class redefinable, it should be an instance of `<redefinable-class>'.

Also, scm_t_uint8 and related types are deprecated in favor of the C99 stdint.h types. This release does not offer any API or ABI stability guarantees, so stick to the stable 2.2 release if you want a stable working version.

You can read more in the release notes on the GNU website.

GNU nano 3.0 released with faster file reads, new shortcuts and usability improvements

GIMP gets $100K of the $400K donation made to GNOME

Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes


'Employees of Microsoft' ask Microsoft not to bid on US Military’s Project JEDI in an open letter

Sugandha Lahoti
15 Oct 2018
4 min read
Last Tuesday, Microsoft announced plans to bid on the Joint Enterprise Defense Infrastructure (JEDI) contract, a $10 billion project to build cloud services for the Department of Defense. Over the weekend, however, an account named ‘Employees of Microsoft’ on Medium urged the company not to bid on the JEDI project in an open letter. They said, “The contract is massive in scope and shrouded in secrecy, which makes it nearly impossible to know what we as workers would be building.”

At the time of writing, no further details about the ‘Employees of Microsoft’ Medium account have come to light, other than that the open letter is its first post. It is unclear whether the account genuinely represents a section of Microsoft employees and, if so, how many have signed the letter. No names are attached to it.

Earlier this month, Google announced that it will not compete for the Pentagon’s cloud-computing contract, opting out of bidding for JEDI because the project may conflict with its principles for the ethical use of AI. In August, Oracle Corp filed a protest with the Government Accountability Office (GAO) against the JEDI cloud contract; Oracle believes the contract should not be awarded to a single company but should instead allow for multiple winners. DoD Chief Management Officer John H. Gibson II explained the program’s impact, saying, “We need to be very clear. This program is truly about increasing the lethality of our department.”

Many Microsoft employees agree that what they build should not be used for waging war. Per the letter, “When we decided to work at Microsoft, we were doing so in the hopes of empowering every person on the planet to achieve more, not with the intent of ending lives and enhancing lethality.” They also allege that with JEDI, Microsoft executives are on track to betray the principles of “reliability and safety, privacy and security, inclusiveness, transparency, and accountability” in exchange for short-term profits.

What do Microsoft employees want?

The petitioners ask pointed questions such as, “what are Microsoft’s A.I. Principles, especially regarding the violent application of powerful A.I. technology? How will workers, who build and maintain these services in the first place, know whether our work is being used to aid profiling, surveillance, or killing?” They want clear ethical guidelines and meaningful accountability on which uses of technology are acceptable and which are off the table. They also want the cloud and edge solutions listed on Azure’s blog to be reviewed by Microsoft’s A.I. ethics committee, Aether. Beyond that, the petitioners urge employees of other tech companies to take similar action: ask “how your work will be used, where it will be applied, and then act according to your principles.”

Many employees within Microsoft have also voiced ethical concerns about the company’s ongoing contract with Immigration and Customs Enforcement (ICE), under which Microsoft provides Azure cloud computing services that, the letter argues, have enabled ICE to enact violence and terror on families at the border and within the United States. “Despite our objections, the contract remains in place. Microsoft’s decision to pursue JEDI reiterates the need for clear ethical guidelines, accountability, transparency, and oversight.”

Read the entire open letter on Medium.
Google opts out of Pentagon’s $10 billion JEDI cloud computing contract, as it doesn’t align with its ethical use of AI principles

Oracle’s bid protest against U.S. Defence Department’s (Pentagon) $10 billion cloud contract

Google takes steps toward better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices


Jeff Weiner talks about technology implications on society, unconscious bias, and skill gaps: Wired 25

Sugandha Lahoti
15 Oct 2018
4 min read
Last Saturday, Wired interviewed Jeff Weiner, CEO of LinkedIn, as part of its 25th anniversary celebration. He talked about the implications of technology on modern society, saying that technology amplifies tribalism. He also discussed how LinkedIn keeps a tab on unconscious bias and why Americans need to develop soft skills to succeed in the coming years.

Technology accentuates tribalism

When asked about the implications of technology on society, Weiner said, “I think increasingly, we need to proactively ask ourselves far more difficult, challenging questions—provocative questions—about the potential unintended consequences of these technologies. And to the best of our ability, try to understand the implications for society.” The concern is well founded: every week there is a top story about some company going wrong in some direction, from the shutting down of Google+ to Facebook’s security breach compromising 50M accounts. He added that technology dramatically accelerates and reinforces tribalism at a time when we increasingly need to come together as a society, and said that one of the most important challenges for tech in the next 25 years is to “understand the impact of technology as proactively as possible. And trying to create as much value, and trying to bring people together to the best of our ability.”

Unconscious bias on LinkedIn

He also talked about unconscious bias as an unintended consequence of LinkedIn’s algorithms and initiatives: “It shouldn't happen that Linkedin reinforces the growing socioeconomic chasms on a global basis, especially here in the United States, by providing more and more opportunity for those that went to the right schools, worked at the right companies, and already have the right networks.”

Read more: 20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017

He elaborated on how LinkedIn is addressing this unconscious bias. LinkedIn’s Career Advice Hub was developed last year, with the goal of creating economic opportunity for every member of the global workforce, as a response to the unconscious bias that crept into its ‘Ask For a Referral’ program. The Career Advice Hub enables any member of LinkedIn to ask for help, and any member of LinkedIn to volunteer to help and mentor them. LinkedIn is also going to create economic opportunities for frontline workers, middle-skilled workers, and blue-collar workers. Another focus is on knowledge workers “who don't necessarily have the right networks or the right degrees.”

Soft skills: the biggest skill gap in the U.S.

Jeff also said that the biggest skills gap in the United States is not coding skill but soft skills: written communication, oral communication, team building, people leadership, and collaboration. “For jobs like sales, sales development, business development, customer service, this is the biggest gap, and it's counter-intuitive.”

Read more: 96% of developers believe developing soft skills is important

Read more: Soft skills every data scientist should teach their child

Soft skills matter because AI is still far from being able to replicate and replace human interaction and the human touch. “So there's an incentive for people to develop these skills because those jobs are going to be more stable for a longer period of time.” Before you start thinking about becoming an AI scientist, you need to know how to send email, how to work a spreadsheet, and how to do word processing.
Jeff says, “Believe it or not, there are broad swaths of the population and the workforce that don't have those skills. And it turns out if you don't have these foundational skills, if you're in a position where you need to re-skill for a more advanced technology, it becomes almost prohibitively complex to learn multiple skills at the same time.”

Read the full interview on Wired.

The ethical dilemmas developers working on Artificial Intelligence products must consider

Consumer protection organizations submit a new data protection framework to the Senate Commerce Committee

Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill “that sets a floor, not a ceiling”


Cimple: A DSL to utilize CPU time from tens to hundreds of nanoseconds

Prasad Ramesh
15 Oct 2018
3 min read
Three MIT students and an associate professor published a paper in July introducing an Instruction and Memory Level Parallelism (IMLP) task programming model for computing. They realize it via a domain-specific language (DSL) called Cimple (Coroutines for Instruction and Memory Parallel Language Extensions).

Why Cimple?

Before looking at what it is, let’s understand the motivation behind the work. As the paper notes, there is currently a critical gap between millisecond and nanosecond latencies for process loading and execution. Existing software and hardware techniques that hide these latencies are inadequate to fully utilize all of the memory hierarchy, from CPU caches to RAM. The work is based on the belief that an efficient, flexible, and expressive programming model can scale across all of the memory hierarchy, from tens to hundreds of nanoseconds.

Modern processors with dynamic execution can exploit instruction level parallelism (ILP) and memory level parallelism (MLP) by using wide superscalar pipelines, vector execution units, and deep buffers for in-flight memory requests. However, these resources “often exhibit poor utilization rates on workloads with large working sets”. With IMLP, tasks execute as coroutines that yield execution at annotated long-latency operations, for example memory accesses, divisions, or unpredictable branches. The IMLP tasks are interleaved on a single process thread and also integrate well with thread parallelism and vectorization. This led to a DSL embedded in C++ called Cimple.

What is Cimple?

Cimple is a DSL embedded in C++ that allows exploring task scheduling and transformations such as buffering, vectorization, pipelining, and prefetching. It introduces a simple IMLP programming model based on concurrent tasks executing as coroutines, and it separates program logic from programmer hints and scheduling optimizations. A compiler for Cimple automatically generates coroutines for the code. The Cimple compiler and runtime library are used via the embedded DSL, which separates the basic logic from scheduling hints and guided transformations. It builds an Abstract Syntax Tree (AST) directly from succinct C++ code, treats expressions as opaque AST blocks, and exposes conventional control flow primitives in order to enable the transformations.

The results after using Cimple

Cimple is used as a template library generator, and the performance gains are reported. Peak system throughput increased from 1.3× on HashTable to 2.5× on SkipList iteration, and speedups of the time to complete a batch of queries on one thread range from 1.2× on HashTable to 6.4× on BinaryTree.

Source: Cimple: Instruction and Memory Level Parallelism. The abbreviations are Binary Search (BS), Binary Tree (BT), Skip List (SL), Skip List iterator (SLi), and Hash Table (HT).

Overall, Cimple reaches 2.5× throughput gains over hardware multithreading on a multi-core processor and 6.4× on a single thread.

The conclusion of the work is that Cimple is fast, maintainable, and portable. The paper will appear at PACT’18, to be held 1st to 4th November 2018. You can read it on the arXiv website.
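The coroutine-interleaving idea can be illustrated without Cimple itself. Below is a rough conceptual sketch in Python (not C++, and not the authors' code): each task is a generator that yields where a long-latency operation would occur, and a tiny scheduler interleaves the tasks on one thread, which mirrors the shape of the IMLP execution model described above. The names `lookup` and `run_interleaved` are illustrative only.

```python
# Conceptual sketch only: Python generators standing in for Cimple/IMLP coroutines.
from collections import deque

def lookup(table, key):
    # In Cimple, a yield point is annotated at a long-latency operation
    # such as a dependent memory load; here we just mark the spot.
    yield "long-latency point (e.g. memory access)"
    return table.get(key)

def run_interleaved(tasks):
    """Round-robin the coroutines so other tasks run while one is 'stalled'."""
    queue, results = deque(tasks), []
    while queue:
        task = queue.popleft()
        try:
            next(task)            # advance the task to its next yield point
            queue.append(task)    # re-queue it; others run in the meantime
        except StopIteration as finished:
            results.append(finished.value)
    return results

table = {i: i * i for i in range(1000)}
print(run_interleaved([lookup(table, k) for k in (3, 7, 11)]))  # [9, 49, 121]
```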
KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta

Facebook releases Skiplang, a general purpose programming language

low.js, a Node.js port for embedded systems


Facebook says only 29 million and not 50 million users were affected by last month’s security breach

Savia Lobo
15 Oct 2018
3 min read
Last month, Facebook witnessed the largest security breach in its history, which compromised 50 million user accounts and was later fixed by its investigation team to avoid further misuse. On Friday, 12th October, Guy Rosen, VP of Product Management at Facebook, shared details of the attack so users know what actually happened.

A snapshot of the attack

Facebook discovered the issue on September 25th. The attackers exploited a vulnerability in Facebook’s code that existed between July 2017 and September 2018: a series of interactions between three distinct software bugs affecting the ‘View As’ feature, which lets people see what their own profile looks like to someone else. The attackers stole Facebook access tokens to take over people’s accounts. These tokens allow an attacker to take full control of the victim’s account, including logging into third-party applications that use Facebook Login.

Read also: Facebook’s largest security breach in its history leaves 50M user accounts compromised

Deciphering the attack: 29 million users were affected, not 50 million

Guy Rosen stated in his update, “We now know that fewer people were impacted than we originally thought. Of the 50 million people whose access tokens we believed were affected, about 30 million actually had their tokens stolen.”

Here’s what happened

The attackers already had control of a set of accounts connected to Facebook users. They used an automated technique to move from account to account and steal the access tokens of friends, friends of friends, and so on, reaching about 400,000 users. Rosen writes, “this technique automatically loaded those accounts’ Facebook profiles, mirroring what these 400,000 people would have seen when looking at their own profiles. That includes posts on their timelines, their lists of friends, Groups they are members of, and the names of recent Messenger conversations”.

The attackers then used these 400,000 people’s lists of friends to steal access tokens for about 30 million people. They broke these 30 million down into three batches of 15 million, 14 million, and 1 million, and accessed different information for each:

For the 1 million people, the attackers did not access any information.

For 15 million people, the attackers accessed just the name and contact details (phone number, email, or both, depending on what people had on their profiles).

For 14 million people, the attackers accessed not only name and contact details but also other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches.

Facebook will be sending customized messages to the 30 million affected people to explain what information the attackers might have accessed and how they can protect themselves from the after-effects (suspicious calls, emails, and messages).
Guy also clarified, “This attack did not include Messenger, Messenger Kids, Instagram, WhatsApp, Oculus, Workplace, Pages, payments, third-party apps, or advertising or developer accounts.” Meanwhile, Facebook is cooperating with the FBI, the US Federal Trade Commission, the Irish Data Protection Commission, and other authorities to look into how the attackers used Facebook and into the possibility of smaller-scale attacks. To know more about this in detail, visit Guy Rosen’s official blog post.

Facebook introduces two new AI-powered video calling devices “built with Privacy + Security in mind”

Facebook finds ‘no evidence that hackers accessed third party Apps via user logins’, from last week’s security breach

“Facebook is the new Cigarettes”, says Marc Benioff, Salesforce Co-CEO

NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!

Melisha Dsouza
15 Oct 2018
3 min read
“Technology is front and center in every business strategy, and enterprises of all sizes and in all industries must embrace digital to attract, retain, and enrich customers,” – Gus Robertson, CEO, NGINX

At NGINX Conf 2018, the NGINX team announced enhancements to its Application Platform that will serve as a common framework across monolithic and microservices-based applications. The upgrade comes with three new releases: NGINX Plus, NGINX Controller, and NGINX Unit. These have been engineered to provide a built-in service mesh for managing microservices and an integrated application programming interface (API) management platform, while maintaining the traditional load balancing capabilities and a web application firewall (WAF).

An application delivery controller (ADC) is used to improve the performance of web applications. The ADC acts as a mediator between web and application servers and their clients, transferring requests and responses between them while enhancing performance through load balancing, caching, compression, and offloading of SSL processing. The main aim of re-architecting NGINX’s platform and launching these updates was to provide a more comprehensive approach to integrating load balancing, service mesh technologies, and API management, leveraging the modular architecture of the NGINX Controller.

Here is a gist of the three new NGINX product releases:

#1 NGINX Controller 2.0

This is an upgrade of NGINX Controller 1.0, launched in June 2018, which introduced centralized management, monitoring, and analytics for NGINX Plus load balancers. NGINX Controller 2.0 brings advanced NGINX Plus configuration, including version control, diffing, reverting, and more. It also includes an all-new API Management Module, which manages NGINX Plus as an API gateway. Besides this, the Controller will also include a future Service Mesh Module.

#2 NGINX Plus R16

R16 comes with dynamic clustering, including clustered state sharing and key-value stores for global rate limiting and DDoS mitigation. It also comes with load balancing algorithms for Kubernetes and microservices, enhanced UDP for VoIP and VDI, and AWS PrivateLink integration.

#3 NGINX Unit 1.4

This release improves security and language support while providing support for TLS. It also adds JavaScript with Node.js, extending the existing Go, Perl, PHP, Python, and Ruby language support.

Enterprises can now use the NGINX Application Platform to function as a Dynamic Application Gateway and a Dynamic Application Infrastructure. NGINX Plus and NGINX are used by popular, high-traffic sites such as Dropbox, Netflix, and Zynga; more than 319 million websites worldwide rely on NGINX Plus and NGINX application delivery platforms.

To know more about this announcement, head over to DevOps.com.

Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless

OpenFaaS releases full support for stateless microservices in OpenFaaS 0.9.0

Getting started with F# for .Net Core application development [Tutorial]


Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system

Natasha Mathur
15 Oct 2018
3 min read
Google announced last week that it is open-sourcing Active Question Answering (ActiveQA), a research project that involves training artificial agents for question answering using reinforcement learning. As part of open-sourcing the project, Google has released a TensorFlow package for the ActiveQA system.

The TensorFlow ActiveQA package comprises three main components, along with the code necessary to train and run the ActiveQA agent. The first component is a pre-trained sequence-to-sequence model that takes a question as input and returns its reformulations. The second component is an answer selection model that uses a convolutional neural network to score each triplet of original question, reformulation, and answer; the selector uses pre-trained, publicly available word embeddings (GloVe). The third component is a question answering system (the environment) that uses BiDAF, a popular question answering system.

“ActiveQA... learns to ask questions that lead to good answers. However, because training data in the form of question pairs, with an original question and a more successful variant, is not readily available, ActiveQA uses reinforcement learning, an approach to machine learning concerned with training agents so that they take actions that maximize a reward, while interacting with an environment,” reads the Google AI blog.

The concept of ActiveQA was first introduced in Google’s ICLR 2018 paper “Ask the Right Questions: Active Question Reformulation with Reinforcement Learning”. ActiveQA differs substantially from traditional QA systems, which use supervised learning techniques together with labeled data to train a system. Such a system can answer arbitrary input questions, but it cannot deal with uncertainty the way humans would: it cannot reformulate questions, issue multiple searches, or evaluate the responses, which leads to poor-quality answers.

ActiveQA, on the other hand, comprises an agent that consults the QA system repeatedly. The agent reformulates the original question many times, which helps it select the best answer. Each reformulation is evaluated on how good the corresponding answer is; if the answer is good, the learning algorithm adjusts the model’s parameters so that the reformulation that led to the right answer is more likely to be generated again. This dynamic interaction between the agent and the QA system leads to better-quality answers.

As an example from Google, consider the question “When was Tesla born?”. The agent reformulates the question in two different ways, “When is Tesla’s birthday” and “Which year was Tesla born”, and retrieves the answers to both from the QA system. Using all this information collectively, it returns the answer “July 10, 1856”.

“We envision that this research will help us design systems that provide better and more interpretable answers, and hope it will help others develop systems that can interact with the world using natural language,” mentions Google.

For more information, read the official Google AI blog.
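The reformulate–answer–select loop described above can be summarized in a few lines. This is a conceptual sketch, not the API of the released google/active-qa TensorFlow package; `reformulate`, `answer_fn`, and `score` are hypothetical stand-ins for the seq2seq reformulator, the BiDAF environment, and the CNN answer selector.

```python
# Conceptual sketch of the ActiveQA loop; not the google/active-qa API.
# reformulate(), answer_fn(), and score() are hypothetical placeholders.
from typing import Callable, Iterable

def active_qa(question: str,
              reformulate: Callable[[str], Iterable[str]],
              answer_fn: Callable[[str], str],
              score: Callable[[str, str, str], float]) -> str:
    """Ask several rewrites of `question` and keep the best-scoring answer."""
    best_answer, best_reward = None, float("-inf")
    for variant in reformulate(question):          # e.g. "When is Tesla's birthday"
        answer = answer_fn(variant)                # query the QA environment (BiDAF)
        reward = score(question, variant, answer)  # answer-selector score
        if reward > best_reward:
            best_answer, best_reward = answer, reward
        # During training, `reward` would also be fed back to the reformulation
        # model so that rewrites leading to good answers become more likely.
    return best_answer
```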
Google, Harvard researchers build a deep learning model to forecast earthquake aftershocks location with over 80% accuracy

Google strides forward in deep learning: open sources Google Lucid to answer how neural networks make decisions

Google moving towards data centers with 24/7 carbon-free energy


InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out

Bhagyashree R
15 Oct 2018
3 min read
Yesterday, the InfernoJS community announced the release of InfernoJS v6.0.0. This version comes with Fragments, which let you group a list of children without adding extra nodes to the DOM. Three new methods have been added, createRef, forwardRef, and rerender, along with a few breaking changes.

Added support for Fragments

Support for Fragments, a new type of VNode, has been added. Fragments enable you to group a list of children without adding extra nodes to the DOM: you can return an array from a Component's render, creating an invisible layer that ties its content together without rendering any container to the actual DOM. Fragments can be created in the following four ways:

Native Inferno API: createFragment(children: any, childFlags: ChildFlags, key?: string | number | null)

JSX: <> ... </>, <Fragment> .... </Fragment> or <Inferno.Fragment> ... </Inferno.Fragment>

createElement API: createElement(Inferno.Fragment, {key: 'test'}, ...children)

Hyperscript API: h(Inferno.Fragment, {key: 'test'}, children)

createRef API

Refs provide a way to access DOM nodes or elements created in the render method. You can now create refs using createRef() and attach them to elements via the ref attribute. This new method allows nicer syntax and reduces code when no callback to DOM creation is needed.

forwardRef API

The forwardRef API allows you to “forward” a ref inside a functional Component. Forwarding a ref means automatically passing it through a component to one of its children. This is useful if you want to create a reference to a specific element inside simple functional Components.

rerender

With the rerender method, all pending setState calls are flushed and rendered immediately. You can use it when render timing is important or to simplify tests.

New lifecycle

The old lifecycle methods componentWillMount, componentWillReceiveProps, and componentWillUpdate will not be called when the new lifecycle methods getDerivedStateFromProps or getSnapshotBeforeUpdate are used.

What are the breaking changes?

Since not all applications need server-side rendering, hydrate is now part of the inferno-hydrate package.

Style properties now use hyphenated names, for example backgroundColor => background-color.

In order to support the JSX Fragment syntax, babel-plugin-inferno now depends on Babel v7.

The setState lifecycle has changed for better compatibility with ReactJS. componentDidUpdate will now be triggered later in the lifecycle chain, after refs have been created.

String refs are completely removed. Instead, you can use callback refs, the createRef API, or forwardRef.

Read the release notes of InfernoJS on its GitHub repository.

Node.js v10.12.0 (Current) released

The Ember project announces version 3.4 of Ember.js, Ember Data, and Ember CLI

React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!


RawGit, the project that made sharing and testing code on GitHub easy, is shutting down!

Melisha Dsouza
15 Oct 2018
2 min read
On the 8th of October, the team at RawGit announced that the GitHub CDN service is now in a sunset phase and will soon shut down. The project was started five years ago with the intention of helping users quickly share example code or test pages from GitHub. Using RawGit, one could skip the trouble of setting up a static site on a GitHub Pages branch when temporarily sharing examples. It acted as a caching proxy that ensured minimal load was placed on GitHub while giving users easy static file hosting from a GitHub repo.

Why is RawGit shutting down?

August 2018 brought news of crypto miners exploiting RawGit to serve files from GitHub repositories. Attackers abused the RawGit CDN to hijack user resources: a user aliased jdobt uploaded malicious files to GitHub, which were then cached by RawGit, and the resulting RawGit URLs were used to insert cryptojacking mining malware into sites running WordPress and Drupal. This can be seen as a key reason for shutting RawGit down, especially given what its creator Ryan Grove stated: 'RawGit has also become an attractive distribution mechanism for malware. Since I have almost no time to devote to fighting malware and abuse on RawGit (and since it would be no fun even if I did have the time), I feel the responsible thing to do is to shut it down. I would rather kill it than watch it be used to hurt people.'

Ryan has mentioned a few free services as alternatives to some of RawGit's functionality:

jsDelivr

GitHub Pages

CodeSandbox

unpkg

GitHub repositories that have used RawGit to serve content within the last month will continue to be served until at least October 2019. URLs for all other repositories are no longer being served.

To know more about this announcement, head over to RawGit’s official blog.

4 myths about Git and GitHub you should know about

Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace

GitHub’s new integration for Jira Software Cloud aims to provide teams a seamless project management

Is AT&T trying to twist data privacy legislation to its own favor?

Amarabha Banerjee
15 Oct 2018
4 min read
On September 26th, U.S. Senator John Thune (R-S.D.), chairman of the Senate Committee on Commerce, Science, and Transportation, convened a hearing titled ‘Examining Safeguards for Consumer Data Privacy’. Executives from AT&T, Amazon, Google, Twitter, Apple, and Charter Communications provided their testimonies to the Committee. The hearing took place to examine the privacy policies of top technology and communications firms, review the current state of consumer data privacy, and offer members the opportunity to discuss possible approaches to safeguarding privacy more effectively.

John Thune opened the meeting by saying, “This hearing will provide leading technology companies and internet service providers an opportunity to explain their approaches to privacy, how they plan to address new requirements from the European Union and California, and what Congress can do to promote clear privacy expectations without hurting innovation.”

There is, however, one major problem with this approach. A hearing on consumer privacy without any participation from the consumer side is like a meeting to discuss women's safety and empowerment without any woman on the board. Why would the administration do such a thing? They might simply not be ready to bring all sides into one room. They did hold a second set of hearings with privacy advocates last week, but will this really bring a change in perspective? And where are we headed?

AT&T and net neutrality

One of the key issues at hand in this story is net neutrality. For those who don’t know, this is the principle that Internet service providers should allow access to all content and applications regardless of the source, and shouldn’t be able to favor or block particular products or websites. It basically means a democratic internet. The recent law ending net neutrality across the majority of U.S. states was arguably pushed and supported by major ISPs and corporations. This makes AT&T's declaration that it wants to uphold user privacy rules seem farcical, like a statement made by a hunter luring its prey with fake consolations.

As one of the leading telecom companies, AT&T has a significant stake in the online advertising and direct TV industries. The more it can track you online and record your habits, the better it can push ads and continue to milk user data without users being informed. That was its goal when it worked to dismantle the modest FCC user data privacy guidelines for broadband providers last year, before they could even take effect. Those rules largely just mandated that ISPs be transparent about what data is collected and who it's being sold to, while requiring opt-in consent for particularly sensitive consumer data like your financial background.

When the same company rallies for user data privacy rules and tries to burden social media and search engine giants like Facebook, Google, and Microsoft, there is definite doubt about its actual intent. The real reason might just be to weaken the power of major tech companies like Google and Facebook and push its own agenda via its broadband network. Monopoly in any form is not an ideal scenario for users and customers. While Google and Facebook are vying for a monopoly over how users interact online every day, AT&T is playing a different game altogether: that of gaining control of the internet itself.
Google, though, has plans to lay its own internet cable under the sea, so it is going to be hard for AT&T to compete, as admirable as its ostensible hubris might be. Still, there is a decent chance that it becomes a two-horse race by the middle of the next decade. Of course, the ultimate impact of this sort of monopoly remains to be seen. For AT&T, the opportunity is there, even if it looks like a big challenge.

Google, Amazon, AT&T met the U.S. Senate Committee to discuss consumer data privacy, yesterday

The U.S. Justice Department sues to block the new California Net Neutrality law

California’s tough net neutrality bill passes state assembly vote


Privilege escalation: Entry point for malware via program errors

Savia Lobo
14 Oct 2018
2 min read
Malware, or malicious software, is designed to harm users’ computer systems in multiple ways. Over the years, hackers and attackers have used various methods to inject viruses, worms, Trojans, and spyware to bring down a computer system. To combat modern malware, you must know how malware functions and what techniques attackers use to launch it within a system. Some advanced malware techniques include:

Privilege escalation: how malware attempts to increase its reach within the system.

Persistence methods: keeping malware in an execution state for a longer time.

Data encoding: ways to hide the intent of the malware.

Covert launching techniques: launching malware in the most stealthy manner possible.

Of these, privilege escalation is a network intrusion method where malware enters the system via programming errors or design flaws. Through these channels, the attacker can gain direct access to the network and its associated data and applications.

Watch the video below by Munir Njenga to learn all about privilege escalation and its types in depth, using real-world examples. https://www.youtube.com/watch?v=Qzlkw5sJUsw

About Munir Njenga

Munir is a technology enthusiast, cybersecurity consultant, and researcher. His skills and competencies stem from his active involvement in engagements that deliver advisory services such as network security reviews, security course development, training and capacity building, mobile and internet banking security reviews (BSS, MSC, HLR/AUC, IN, NGN, GGSN/SGSN), web applications, and network attack and penetration testing.

To know more about privilege escalation and to learn other malware analysis methods, check out our course ‘Advanced Malware Analysis’, to which this video belongs.


Microsoft announces ‘Decentralized Identity’ in partnership with DIF and W3C Credentials Community Group

Bhagyashree R
12 Oct 2018
3 min read
Yesterday, Microsoft published a white paper on its Decentralized Identity (DID) solution. These identities are user-generated, self-owned, globally unique identifiers rooted in decentralized systems. Over the past 18 months, Microsoft has been working towards building a digital identity system using blockchain and other distributed ledger technologies. With these identities, Microsoft aims to enhance personal privacy, security, and control.

Microsoft has been actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. They are working with these groups to identify and develop critical standards. Together they plan to establish a unified, interoperable ecosystem that developers and businesses can rely on to build more user-centric products, applications, and services.

Why is decentralized identity (DID) needed?

Nowadays, people use digital identities at work, at home, and across every app, service, and device. Access to these digital identities, such as email addresses and social network IDs, can be removed at any time by the email provider, social network provider, or other external parties. Users also grant permissions to numerous apps and devices, which calls for a high degree of vigilance in tracking who has access to what information.

This standards-based decentralized identity system empowers users and organizations to have greater control over their data. It addresses the problem of users granting broad consent to countless apps and services by providing a secure, encrypted digital hub where they can store their identity data and easily control access to it.

What it means for users, developers, and organizations

Benefits for users:

It enables all users to own and control their identity

It provides secure experiences that incorporate privacy by design

It supports the design of user-centric apps and services

Benefits for developers:

It allows developers to provide users personalized experiences while respecting their privacy

It enables developers to participate in a new kind of marketplace, where creators and consumers exchange directly

Benefits for organizations:

Organizations can deeply engage with users while minimizing privacy and security risks

It provides a unified data protocol for organizations to transact with customers, partners, and suppliers

It improves the transparency and auditability of business operations

To know more about decentralized identity, read the white paper published by Microsoft.

Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members

Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure’s Intelligent Cloud

Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google’s Stream news last week

GNOME 3.32 says goodbye to application menus

Bhagyashree R
12 Oct 2018
3 min read
On Tuesday, GNOME announced that it plans to retire app menus in its next release, GNOME 3.32. Application menus, or app menus, are the menus shown in the GNOME 3 top bar, with the name and icon of the current app.

Why application menus are being removed in GNOME

GNOME gives the following reasons for bidding adieu to application menus:

Poor user engagement: Since their introduction, application menus have been a source of usability issues. Despite efforts to improve them, users don't really engage with them.

Two different locations for menu items: Another reason application menus haven't done well could be the split between app menus and the menus in application windows. With two different locations for menu items, it is easy to look in the wrong place, particularly when one menu is visited more frequently than the other.

Limited adoption by third-party applications: Application menus have seen limited adoption by third-party applications. They are often left empty, other than the default quit item, and people have learned to ignore them.

What guidelines must developers follow?

All GNOME applications will have to move the items from their app menu to a menu inside the application window. Developers need to:

Remove the app menu and move its menu items to the primary menu

If required, split the primary menu into primary and secondary menus

Rename the about menu item from "About" to "About application-name"

Guidelines for the primary menu

The primary menu is the menu in the header bar with the icon of three stacked lines, also referred to as the hamburger menu.

1. In addition to app menu items, primary menus can also contain other menu items.

2. The quit menu item is not required, so it is recommended to remove it from all locations.

3. Move other app menu items to the bottom of the primary menu.

4. A typical arrangement of app menu items in a primary menu is a single group of items: Preferences, Keyboard Shortcuts, Help, About application-name. (A minimal code sketch of this layout appears at the end of this piece.)

5. Applications that use a menu bar should remove their app menu and move any items to the menu bar menus.

If an application fails to remove its application menu by the release of GNOME 3.32, the menu will be shown in the app’s header bar, using the fallback UI that is already provided by GTK.

Read the full announcement on GNOME’s official website.

Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes

GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more

GIMP gets $100K of the $400K donation made to GNOME
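For developers wondering what the migration looks like in practice, here is a minimal, hypothetical PyGObject (GTK 3) sketch of the layout recommended above: no app menu, with Preferences, Keyboard Shortcuts, Help, and About placed in a hamburger-style primary menu in the header bar. The action names used are illustrative and would need matching GAction registrations in a real application; this is not code from the GNOME announcement.

```python
# Minimal sketch (GTK 3 / PyGObject): former app-menu items live in a
# header-bar primary menu. Action names like "app.preferences" are placeholders
# and would need to be registered with add_action() in a real application.
import sys
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Gio


class Demo(Gtk.Application):
    def do_activate(self):
        win = Gtk.ApplicationWindow(application=self)

        # Recommended single group of items, ending with "About <application-name>".
        menu = Gio.Menu()
        menu.append("Preferences", "app.preferences")
        menu.append("Keyboard Shortcuts", "win.show-help-overlay")
        menu.append("Help", "app.help")
        menu.append("About Demo", "app.about")

        burger = Gtk.MenuButton()  # primary-menu button in the header bar
        burger.add(Gtk.Image.new_from_icon_name("open-menu-symbolic", Gtk.IconSize.BUTTON))
        burger.set_menu_model(menu)

        header = Gtk.HeaderBar(title="Demo", show_close_button=True)
        header.pack_end(burger)
        win.set_titlebar(header)
        win.show_all()


if __name__ == "__main__":
    Demo(application_id="org.example.Demo").run(sys.argv)
```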


Meet Prescience, the AI that can help anesthesiologists take critical decisions in the OR

Prasad Ramesh
12 Oct 2018
4 min read
Before and during surgery, anesthesiologists need to keep track of the anesthesia administered and the patient’s vitals. An imbalance in the level of anesthesia can cause low oxygen levels in the blood, known as hypoxemia. Currently, there is no system to predict when this could happen during surgery, and the patient is at the mercy of an anesthesiologist’s experience and discretion.

The machine learning system called ‘Prescience’

A team of researchers from the University of Washington has come up with a system to predict whether a patient is at risk of hypoxemia, using patient data such as age and body mass index. Data from 50,000 surgeries was collected to train the machine learning model. The team wanted the model to solve two different problems: first, to look at pre-surgery patient information and predict whether a patient would develop hypoxemia after anesthesia is administered; second, to predict the occurrence of hypoxemia at any point during the surgery using real-time data. For the first problem, BMI was a crucial predictive factor; for the second, the oxygen levels were. Lee and Lundberg then worked on a new approach to train Prescience so that it would generate understandable explanations behind its predictions.

Testing the model

To test Prescience, Lee and Lundberg created a web interface that ran anesthesiologists through cases from surgeries in the dataset that were not used to train the model. For the real-time test, the researchers specifically chose cases that would be hard to predict, for example when a patient’s blood oxygen level is stable for 10 minutes and then drops. Prescience improved the ability of doctors to correctly predict a patient’s hypoxemia risk by 16 percent before surgery and by 12 percent in real time during surgery. With the help of Prescience, the anesthesiologists were able to correctly distinguish between the two scenarios nearly 80 percent of the time, both before and during surgery.

Prescience is not ready to be used in real operations yet. Lee and Lundberg plan to continue working with anesthesiologists to improve it. In addition to hypoxemia, the team hopes to predict low blood pressure and recommend appropriate treatment plans with Prescience in the future.

This method ‘opens the AI black box’

Although they could have stopped at a model that merely predicts hypoxemia, the researchers also wanted to answer the question “Why?”, a change from the traditional black-box AI models engineers and researchers are used to. Lee, an author of the paper, said: “Modern machine-learning methods often just spit out a prediction result. They don’t explain to you what patient features contributed to that prediction. Our new method opens this black box and actually enables us to understand why two different patients might develop hypoxemia. That’s the power.”

Who are the team members?

The research team consists of four people, two from medicine and two from computer science: Bala Nair, research associate professor of anesthesiology and pain medicine at the UW School of Medicine; Su-In Lee, an associate professor in the UW’s Paul G. Allen School of Computer Science & Engineering; Monica Vavilala, professor of anesthesiology and pain medicine at the UW School of Medicine; and Scott Lundberg, a doctoral student in the Allen School. The system is not meant to replace doctors.
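Neither the Prescience model nor its dataset is public in this form, but the general recipe, a risk model over patient features plus per-prediction feature attributions, can be sketched on toy data. The feature names, the synthetic data, and the choice of gradient-boosted trees with the shap package below are assumptions for illustration, not the authors' code.

```python
# Toy illustration of "prediction + explanation": synthetic data, made-up
# features, and a generic model -- not the Prescience model or dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
import shap  # pip install shap

rng = np.random.default_rng(0)
n = 2000
age  = rng.normal(50, 15, n)
bmi  = rng.normal(28, 6, n)
spo2 = rng.normal(97, 2, n)            # current blood-oxygen saturation (%)
X = np.column_stack([age, bmi, spo2])

# Synthetic label: higher BMI and lower SpO2 raise the simulated "hypoxemia" risk.
risk = 0.08 * (bmi - 28) - 0.9 * (spo2 - 97) + rng.normal(0, 1, n)
y = (risk > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Per-prediction attributions: which features pushed this patient's risk up or down?
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]
print(dict(zip(["age", "bmi", "spo2"], np.round(contributions, 3))))
```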
You can read the research paper in the Nature science journal and on the University of Washington website.

Swarm AI that enables swarms of radiologists, outperforms specialists or AI alone in predicting Pneumonia

How to predict viral content using random forest regression in Python [Tutorial]

SAP creates AI ethics guidelines and forms an advisory panel