
Tech News

3711 Articles

Samsung opens its AI based Bixby voice assistant to third-party developers

Melisha Dsouza
08 Nov 2018
3 min read
“Our goal is to offer developers a robust, scalable and open AI platform that makes it easy for them to launch and evolve the amazing experiences they create for our users.”
- Kyunghak Hyun, Product Manager of the AI Product Management Group at Samsung

At Samsung Developer Conference 2018, the company announced that it is opening Bixby Developer Studio, an Integrated Development Environment (IDE), to developers. This will allow third-party developers to build functionality for the Artificial Intelligence (AI) assistant. Viv Labs CEO and Siri co-founder Dag Kittlaus told the crowd that their dev tools are “way ahead of the other guys”.

The company will also introduce Bixby Marketplace, where users can discover new functionality for their voice assistant. This will also help developers make money from the capabilities they build for this intelligent companion. Bixby, which started as a practical way to use voice to interact with the phone, will now evolve into a scalable, open AI platform supporting watches, refrigerators, tablets, washing machines, and many more devices.

Developers will gain access to the same development tools that Samsung’s internal developers use to create Bixby Capsules, the units used to add features to Bixby. Much like Skills on Amazon Alexa, developers can create custom Bixby interactions that can be added to various devices in the future. Samsung said the move was in line with the company’s goal of building a scalable, open Artificial Intelligence (AI) platform where developers and service providers can access tools to bring Bixby to more people and devices around the world.

As another initiative to scale Bixby services, Samsung plans to expand support to five new languages - British English, French, German, Italian, and Spanish - in the coming months. This will be especially crucial for bringing Bixby-enabled devices like the Galaxy Home and smart fridges to users all around the globe.
Samsung also demonstrated Bixby's capabilities at the conference. The demo included Bixby helping a user book a hotel by opening the various portals used in the process. This move - along with expanding Bixby to more devices and supporting more languages - is seen as Samsung's effort to increase Bixby's recognition around the globe. You can read more about this news at TechCrunch.

Cisco Spark Assistant: World’s first AI voice assistant for meetings
Voice, natural language, and conversations: Are they the next web UI?
12 ubiquitous artificial intelligence powered apps that are changing lives


Phoenix 1.4.0 is out with ‘Presence javascript API', HTTP2 support, and more!

Savia Lobo
08 Nov 2018
2 min read
Yesterday, the Phoenix web framework announced the release of its latest version, Phoenix 1.4. This release includes new features such as HTTP2 support, an improved development experience with faster compile times, new error pages, and local SSL certificate generation. The community also shipped a new and improved Presence JavaScript API.

Features in Phoenix 1.4.0

phx_new archive via hex
The mix phx.new archive can now be installed via hex, for a simpler, versioned installation experience. Existing Phoenix applications will continue to work on Elixir 1.4; the new phx.new archive, however, requires Elixir 1.5+.

HTTP2 support with a one-line change
Thanks to the release of Cowboy 2, Phoenix 1.4 supports HTTP2 with a single-line change to mix.exs. One simply adds {:plug_cowboy, "~> 2.0"} to their deps and Phoenix will run with the Cowboy 2 adapter.

New phx.gen.cert to aid local SSL development
Most browsers require connections over SSL for HTTP2 requests; without it, they fall back to HTTP 1.1 requests. To aid local development over SSL, Phoenix now includes a new phx.gen.cert task which generates a self-signed certificate for HTTPS testing in development.

Faster development compilation
Compilation speeds have improved in the new release, thanks to contributions to plug and to compile-time changes.

New development 404 page
Phoenix’s 404 page (in development) now lists the available routes for the originating router.

A new UserSocket for connection info
Access to more underlying transport information when using Phoenix channels has been a highly requested feature. The 1.4 release provides a connect/3 UserSocket callback, which can supply connection information such as the peer IP address, host information, and X-Headers of the HTTP request for WebSocket and long-poll transports.
New ‘Presence JavaScript API’
A new, backward-compatible Presence JavaScript API has been introduced to resolve race conditions as well as simplify usage. Previously, multiple channel callbacks against "presence_state" and "presence_diff" events were required on the client, which dispatched to the Presence.syncState and Presence.syncDiff functions. Now the interface has been unified into a single onSync callback, and the presence object tracks its own channel callbacks and state. To know more about Phoenix 1.4.0, visit its official website.

Mojolicious 8.0, a web framework for Perl, released with new Promises and Roles
Web Framework Behavior Tuning
Beating jQuery: Making a Web Framework Worth its Weight in Code
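The race conditions the new API addresses stem from clients having to merge full "presence_state" snapshots and incremental "presence_diff" events themselves. As a rough, language-agnostic illustration of the bookkeeping that the unified onSync callback now hides (a Python sketch with simplified dict shapes, not Phoenix's actual JavaScript implementation):

```python
def sync_state(state, new_state):
    """Replace local state with the server's full presence snapshot."""
    return dict(new_state)

def sync_diff(state, diff):
    """Apply an incremental diff of joins and leaves to local state."""
    state = dict(state)
    for key, metas in diff.get("joins", {}).items():
        state.setdefault(key, []).extend(metas)
    for key, metas in diff.get("leaves", {}).items():
        remaining = [m for m in state.get(key, []) if m not in metas]
        if remaining:
            state[key] = remaining
        else:
            state.pop(key, None)  # last session gone: drop the presence
    return state

# A full snapshot arrives, then a diff: one user joins, another leaves.
state = sync_state({}, {"alice": [{"ref": 1}], "bob": [{"ref": 2}]})
state = sync_diff(state, {"joins": {"carol": [{"ref": 3}]},
                          "leaves": {"bob": [{"ref": 2}]}})
print(sorted(state))  # ['alice', 'carol']
```

Keeping this merge logic inside the presence object, rather than in per-channel callbacks the client must wire up in the right order, is what removes the opportunity for state and diff events to race.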


Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently

Melisha Dsouza
08 Nov 2018
3 min read
Yesterday (on the 7th of November), Facebook open-sourced its high-performance kernel library FBGEMM: Facebook GEneral Matrix Multiplication. The library offers optimized on-CPU performance for the reduced-precision calculations used to accelerate deep learning models. It has delivered 2x performance gains when deployed at Facebook (in comparison to their current production baseline). Users can deploy it using the Caffe2 front end, and it will soon be callable directly from the PyTorch 1.0 Python front end.

Features of FBGEMM

FBGEMM is optimized for server-side inference. It delivers accuracy and efficiency when performing quantized inference using contemporary deep learning frameworks. It is a low-precision, high-performance matrix-matrix multiplication and convolution library that enables large-scale production servers to run the most powerful deep learning models efficiently. The library exploits opportunities to overcome the unique challenges of matrix multiplication at lower precision with bandwidth-bound pre- and post-GEMM operations.

At Facebook, FBGEMM has benefited many AI services: it increased the speed of English-to-Spanish translations by 1.3x, reduced DRAM bandwidth usage in the recommendation system used in feeds by 40%, and sped up character detection by 2.4x in Rosetta, the machine learning system for understanding text in images and videos.

FBGEMM supplies modular building blocks, so the overall GEMM pipeline can be constructed by plugging in different front-end and back-end components. It combines small compute with bandwidth-bound operations and exploits cache locality by fusing post-GEMM operations with the macro kernel, while providing support for accuracy-loss-reducing operations.

Why does GEMM matter?

Floating point operations (FLOPs) are mostly consumed by fully connected (FC) operators in the deep learning models deployed in Facebook’s data centers.
These FC operators are just plain GEMM, which means that their overall efficiency directly depends on GEMM efficiency. 19% of these deep learning models at Facebook implement convolution as im2col followed by GEMM. However, straightforward im2col adds overhead from the copy and replication of input data, so some deep learning libraries implement direct (im2col-free) convolution for improved efficiency. Facebook provides a way to fuse im2col with the main GEMM kernel to minimize im2col overhead.

Facebook says that recent industry and research work has indicated that inference using mixed precision works well without adversely affecting accuracy. FBGEMM uses this as an alternative strategy to improve inference performance with quantized models. Also, newer generations of GPUs, CPUs, and specialized tensor processors natively support lower-precision compute primitives, and hence the deep learning community is moving toward low-precision models. FBGEMM provides a way to perform efficient quantized inference on the current and upcoming generations of CPUs.

Head over to Facebook’s official blog to understand more about this library and how it is implemented.

A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users’ private data up for sale, reports BBC News
90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study
Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues
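The im2col transform the article refers to rewrites a convolution as a single matrix multiplication by unrolling each receptive field into a row of a matrix - at the cost of duplicating overlapping input values, which is exactly the copy overhead described above. A minimal pure-Python sketch for a one-channel 2D input (illustrative only; FBGEMM's real kernels are heavily optimized C++):

```python
def im2col(image, kh, kw):
    """Unroll each kh x kw patch of a 2D image into a row (stride 1, no padding)."""
    h, w = len(image), len(image[0])
    rows = []
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            patch = [image[i + di][j + dj] for di in range(kh) for dj in range(kw)]
            rows.append(patch)
    return rows

def matvec(rows, kernel_flat):
    """Convolution-as-GEMM: multiply the patch matrix by the flattened kernel."""
    return [sum(p * k for p, k in zip(row, kernel_flat)) for row in rows]

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]  # picks out top-left + bottom-right of each 2x2 patch
cols = im2col(image, 2, 2)
out = matvec(cols, [k for row in kernel for k in row])
print(out)  # flattened 2x2 output: [1+5, 2+6, 4+8, 5+9] -> [6, 8, 12, 14]
```

Note that the 3x3 input (9 values) became a 4x4 patch matrix (16 values): that duplication is the overhead that direct convolution and FBGEMM's fused im2col+GEMM both aim to avoid.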


Web Summit 2018: day 2 highlights

Melisha Dsouza
06 Nov 2018
6 min read
Web Summit 2018 began on Monday, November 5. This year, more than 70,000 people have been joined by CEOs and founders of the world’s biggest companies and the most exciting new startups, as well as influential investors and leading journalists. The summit aims to tackle the big challenges facing the tech industry today - and this year, issues aren't in short supply.

Day 2 of this year's Web Summit saw a range of really interesting perspectives on everything from industry diversity to fake news and mixed reality. With so many great conversations happening, it's hard to pick out highlights. However, we tried our best - take a look at what we think are the key things from day 2 in Lisbon.

5 highlights from day 2 of Web Summit 2018

Slack wants to grow its user base 6,000%
Slack co-founder Cal Henderson spoke at Web Summit on Monday, telling an interesting story about how he set up Slack - he started out building an online gaming company and ended up redefining professional communication. But the biggest news was the extent of Slack's ambition. Henderson revealed that Slack plans to increase its user base 6,000% - from 8 million users to an astonishing 500 million. One of the key challenges for Slack, he explained, is simply getting people to shift from old ways of working. "Email has been the primary mode of communication inside business for more than 30 years. Convincing people, or just really telling people that there’s a different way to work, I think, is the biggest challenge," he said.

Magic Leap showcases 'spatial computing' with a new mixed reality product
Brenda Freeman, Chief Marketing Officer at Magic Leap, spent some time showing off Magic Leap One and their new Project Create software. Despite the delays, it looks like they might be on the cusp of a breakthrough when it comes to mixed reality.

"Four years ago, our equipment was large enough to fit in a refrigerator, it took a few years to perfect."
- Brenda Freeman

Project Create looks like a powerful complement to the Magic Leap One. It's described as a 'digital playground' that helps users fully realise Magic Leap One's incredible capabilities. Freeman talked a lot about the concept of 'spatial computing' - Project Create brings this to life, and is perhaps an important stepping stone in embedding the technology in everyday life. Essentially, Magic Leap's technology brings virtual objects into the real world - these objects 'respond' to human actions, such as eye movements.

Diversifying the workforce: a problem we still need to fix
Diversity in tech has been a particularly pertinent issue in 2018, thanks to a combination of the #metoo movement and wider concern about ethics in software engineering. However, while it might feel like things are progressing, the facts state otherwise: the number of female CEOs leading Fortune 500 companies has fallen by 25% this year. This was the context of the conversation between Wall Street Journal journalist Thorold Barker, Vera Jourová (from the European Commission), and Gillian Tans, president of Booking.com. Tans suggested that we're allowing cultures to simply exist - it takes effort to effect change.

A brighter future for Europe's tech scene?
There's a reason there's no real equivalent to Silicon Valley in Europe - the money simply isn't there. And while that probably isn't going to change any time soon, a conversation between Par Jorgen Parson (Northzone), Reshma Sohoni (Seedcamp), and Harry Nelis (Accel) indicated that the outlook for the European tech scene isn't actually all doom and gloom.

https://www.youtube.com/watch?v=4BS9N6Cv6qU

They offered some really useful insights for European tech entrepreneurs, discussing the potential advantages of gaining investment from European investors. In particular, they stressed that local capital can be useful for very new startups, and that reaching out for help is essential.
Tackling fake news through education
Technology has undoubtedly been instrumental in getting us into our current 'fake news' predicament. This was the topic of conversation between Serbian Prime Minister Ana Brnabic, the Guardian's David Pemsel, and Mitchell Baker from Mozilla. For Brnabic, education is key to tackling the effects of fake news. She said: "the best way to fight against false news is investing in education and that is why Serbia is investing in a change in the concept of education – we are teaching young people how to think and not what to think."

Nico Rosberg talks decision making in a world that's moving quickly
Former Formula One driver Nico Rosberg recently made a move into tech entrepreneurship. Asked about the similarities between Formula One and business, he said: "as a Formula One driver... you're pushing the boundaries all the time. And the most important thing to help you make the right decision under pressure is all the preparation that goes into it beforehand... every little thing counts."

https://www.youtube.com/watch?v=5gzg0uTIWIo

Joining TripAdvisor CEO Stephen Kaufer on stage, Rosberg also talked about the importance of trust. "I've built up a trusted inner-circle for myself of people that are absolute experts in analyzing, in a way that I'll never be... they're the best at what they do." Rosberg, an investor in SpaceX, also said he wasn't concerned by Elon Musk's increasingly erratic behavior: "He's always pushing the boundaries... what he has done for all of us and for our planet is so huge."

How the auto industry can manage disruption
Self-driving cars are perhaps one of the best examples of the tech industry disrupting not just an industry but our entire way of life. This was the theme of a talk asking: Is the auto industry at a crossroads?
Featuring Marek Reichman from Aston Martin Lagonda, Martin Hoffman from Volkswagen, and Carsten Breitfeld from BYTON, the session surfaced a number of issues, with legislation and regulation standing out as a crucial unknown in the future of autonomous vehicles. "It will be driven by societies, not by our companies," said Breitfeld, highlighting that the issues raised by autonomous vehicles - their positive and negative impact - go far beyond the small number of businesses currently at the forefront of innovation. Breitfeld also spoke about the future of vehicle automation in terms of platforms. "All the traditional companies eventually will build electric cars, that's not a problem," he said. This means that it simply doesn't make sense for entrepreneurs to get into the self-driving car business itself; instead, they should think of themselves as platform businesses, building the services and software that will allow established companies to easily develop automated vehicles.

https://www.youtube.com/watch?v=rAWC_SUw2WY


China Telecom misdirected internet traffic, says Oracle report

Savia Lobo
06 Nov 2018
3 min read
The Naval War College published a paper titled “China’s Maxim – Leave No Access Point Unexploited: The Hidden Story of China Telecom’s BGP Hijacking”, which contained a number of claims about purported efforts by the Chinese government to manipulate BGP routing in order to intercept internet traffic. Doug Madory, Director of Internet Analysis at Oracle's Internet Intelligence team, addresses the paper’s claims in a recent blog post. He said, “I don’t intend to address the paper’s claims around the motivations of these actions. However, there is truth to the assertion that China Telecom (whether intentionally or not) has misdirected internet traffic (including out of the United States) in recent years. I know because I expended a great deal of effort to stop it in 2017.”

SK Broadband, formerly known as Hanaro, experienced a brief routing leak on 9 December 2015, which lasted a little more than a minute. During the incident, SK’s ASN, AS9318, announced over 300 Verizon routes that were picked up by OpenDNS’s BGPstream service. This leak was announced exclusively through China Telecom (AS4134), one of SK Broadband’s transit providers. Just minutes later, AS9318 began transiting the same routes from Verizon APAC (AS703) to China Telecom (AS4134). China Telecom in turn began announcing them to international carriers such as Telia (AS1299), Tata (AS6453), GTT (AS3257), and Vodafone (AS1273), resulting in AS paths such as:

… {1299, 6453, 3257, 1273} 4134 9318 703

Doug says, “Networks around the world who accepted these routes inadvertently sent traffic to Verizon APAC (AS703) through China Telecom (AS4134). Below is a traceroute mapping the path of internet traffic from London to address space belonging to the Australian government. Prior to this routing phenomenon, it never traversed China Telecom.”
He added, “Over the course of several months last year, I alerted Verizon and other Tier 1 carriers of the situation and, ultimately, Telia and GTT (the biggest carriers of these routes) put filters in place to ensure they would no longer accept Verizon routes from China Telecom. That action reduced the footprint of these routes by 90% but couldn’t prevent them from reaching those who were peering directly with China Telecom.”

The focus of BGP hijack alerting

The common focus of BGP hijack alerting is looking for unexpected origins or immediate upstreams for routed address space. But traffic misdirection can occur at other parts of the AS path. In this scenario, Verizon APAC (AS703) likely established a settlement-free peering relationship with SK Broadband (AS9318), unaware that AS9318 would then send Verizon’s routes exclusively on to China Telecom, which would in turn send them on to the global internet. Doug said, “We would classify this as a peer leak and the result was China Telecom’s network being inserted into the inbound path of traffic to Verizon.” The problematic routing decisions were occurring multiple AS hops from the origin, beyond its immediate upstream. Thus, he adds, the routes accepted from one’s peers also need monitoring, which is a fairly rare practice. Blindly accepting routes from a peer enables the peer to insert itself into the path of your outbound traffic.

To know more about this news in detail, read Doug Madory’s blog post.

US Supreme Court ends the net neutrality debate by rejecting the 2015 net neutrality repeal allowing the internet to be free and open again
Ex-Google CEO, Eric Schmidt, predicts an internet schism by 2028
Has the EU just ended the internet as we know it?
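The monitoring gap Madory describes - checking only a route's origin and immediate upstream - can be sketched as a simple AS-path filter that also inspects the middle of the path. A hypothetical Python illustration (the ASN lists mirror the incident above; real monitors such as BGPstream operate on live BGP update feeds):

```python
def unexpected_transit(as_path, origin, allowed_transit):
    """Return ASNs in the path that are neither the route's origin nor a
    carrier we already expect to transit our routes."""
    return [asn for asn in as_path
            if asn != origin and asn not in allowed_transit]

# AS path seen during the leak: ... 4134 9318 703 (origin AS703, Verizon APAC)
as_path = [1299, 4134, 9318, 703]
origin = 703
# Carriers Verizon expects in the path: its international peers plus AS9318.
allowed_transit = {1299, 6453, 3257, 1273, 9318}

print(unexpected_transit(as_path, origin, allowed_transit))  # [4134] -> China Telecom
```

An origin-only check would have seen nothing wrong here - AS703 still originates its own prefix, and AS9318 is a legitimate peer - which is why path positions beyond the first two hops need watching too.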


US Supreme Court ends the net neutrality debate by rejecting the 2015 net neutrality repeal allowing the internet to be free and open again

Amrata Joshi
06 Nov 2018
4 min read
Yesterday, the United States Supreme Court rejected the telecommunications industry's request to hear its challenge against net neutrality, formally ending the legal fight over a 2016 lower-court decision that upheld the Obama-era net neutrality rules, which ensured a free and open internet.

The 2015 Federal Communications Commission (FCC) order imposing net neutrality rules and strictly regulating broadband had already been reversed by Trump's pick for FCC chairman, Ajit Pai, whose FCC repealed the rules in 2017. The justices' action does not revoke that 2017 repeal. The rules, supported by former US President Barack Obama and intended to safeguard equal access to content on the internet, were opposed by President Donald Trump.

According to the Supreme Court announcement, Justices Clarence Thomas, Samuel Alito, and Neil Gorsuch would have granted the petitions, vacated the judgment of the United States Court of Appeals for the District of Columbia Circuit (which upheld the FCC's net neutrality order), and remanded to that court with instructions to dismiss the cases as moot. Chief Justice John Roberts and Justice Brett Kavanaugh, a judge on the US Court of Appeals for the District of Columbia Circuit, recused themselves from the case. In 2017, Brett Kavanaugh dissented from the ruling upholding net neutrality rules, arguing that the rules violate the First Amendment rights of Internet service providers by preventing them from "exercising editorial control" over Internet content.

FCC’s thoughts on net neutrality

The FCC is defending its net neutrality repeal against a lawsuit filed by dozens of litigants, including 22 state attorneys general, consumer advocacy groups, and tech companies. California State Sen. Scott Wiener (D-San Francisco), author of California's net neutrality law, supported California Attorney General Xavier Becerra's decision.
Wiener said: "Of course, I very much want to see California's net neutrality law go into effect immediately, in order to protect access to the Internet. Yet, I also understand and support the Attorney General's rationale for allowing the DC Circuit appeal to be resolved before we move forward to defend our net neutrality law in court. After the DC Circuit appeal is resolved, the litigation relating to California's net neutrality law will then move forward."

Even Ajit Pai, the FCC chairman, welcomed the court's decision. FCC Commissioner Jessica Rosenworcel, who backed the net neutrality order in 2015, said on Twitter that “the commission had actually petitioned the Supreme Court to erase history and wipe out an earlier court decision upholding open internet policies. But today the Supreme Court refused to do so.”

The legal battle over net neutrality might still continue and could possibly reach the Supreme Court again in a separate case. Senior counsel John Bergmayer of consumer advocacy group Public Knowledge said, “The Supreme Court decision is good news for supporters of net neutrality because it means that the DC Circuit court's previous decision upholding both the FCC's classification of broadband as a telecommunications service, and its rules prohibiting broadband providers from blocking or degrading Internet content, remains in place. Much of the current FCC’s argument against net neutrality depends on ignoring or contradicting the DC Circuit’s earlier findings, but now that these are firmly established as binding law, the Pai FCC’s case is on even weaker ground than before."

The new FCC rules that went into effect in June gave internet service providers greater power over the content that customers access, though they are now the subject of a separate legal fight after being challenged by many groups that backed net neutrality.
The net neutrality repeal turned out to be good for providers like Comcast Corp, AT&T Inc, and Verizon Communications Inc. It was opposed by internet companies like Amazon.com Inc, Facebook Inc, and Alphabet Inc, as the repeal could lead to higher costs. Read more about this news on Ars Technica. For the court’s announcement, check the Supreme Court’s official website.

The U.S. Justice Department sues to block the new California Net Neutrality law
California’s tough net neutrality bill passes state assembly vote
Spammy bots most likely influenced FCC’s decision on net neutrality repeal, says a new Stanford study

Redbird, a modern reverse proxy for node

Amrata Joshi
06 Nov 2018
3 min read
Redbird 8.0, the latest version, was released last month. Redbird is a modern reverse proxy for node. It comes with built-in Cluster, HTTP2, LetsEncrypt, and Docker support, which helps with load balancing, dynamic virtual hosts, proxying web sockets, and SSL encryption. It is a complete library for building dynamic reverse proxies with the speed and robustness of http-proxy: a lightweight package that includes everything needed for easy reverse routing of applications. It is useful for routing applications from different domains on one single host, and for easy handling of SSL.

What’s new in Redbird?

Support for HTTP2: One can now enable HTTP2 simply by setting the HTTP2 flag to true. Note that HTTP2 requires SSL/TLS certificates.

Support for LetsEncrypt: Redbird now supports automatic generation of SSL certificates using LetsEncrypt. When using LetsEncrypt, the obtained certificates are copied to a specific path on disk; one should back them up or save them.

Features

It provides flexible and easy routing
It supports websockets
It offers seamless SSL support, automatically redirecting users from HTTP to HTTPS
It enables automatic TLS certificate generation and renewal
It supports load balancing using a round-robin algorithm
It allows registering and unregistering routes programmatically without a restart, enabling zero-downtime deployments
It supports automatic registration of running containers via Docker support
It enables automatic multi-process operation via cluster support
It is based on top of rock-solid node-http-proxy
It offers optional logging based on bunyan
It uses node-etcd to create proxy records automatically from an etcd cluster

Cluster support in Redbird

Redbird supports automatic generation of a node cluster. To use the cluster support feature, one needs to specify the number of processes one wants it to use.
Redbird automatically restarts any thread that crashes, increasing reliability. If one needs NTLM support, Redbird adds the required header handler, which registers a response handler that makes sure the NTLM auth header is properly split into two entries from http-proxy.

Custom resolvers in Redbird

Redbird comes with custom resolvers that let one decide how the proxy server handles each request. Custom resolvers help with path-based routing, headers-based routing, and wildcard domain routing. The install command for Redbird is npm install redbird. To read more about this news, check out the official GitHub page.

Squid Proxy Server: debugging problems
How to Configure Squid Proxy Server
Squid Proxy Server: Fine Tuning to Achieve Better Performance
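The round-robin load balancing Redbird offers simply cycles through the backend targets registered for a route on each request. A framework-free Python sketch of the idea (Redbird itself is JavaScript; the class and target URLs here are illustrative, not Redbird's API):

```python
from itertools import cycle

class RoundRobinRoute:
    """Cycle through the backend targets registered for one virtual host."""
    def __init__(self, targets):
        self._targets = cycle(targets)

    def next_target(self):
        return next(self._targets)

# Two backends registered for the same host, as when a Redbird route
# is registered twice for one domain.
route = RoundRobinRoute(["http://10.0.0.1:3000", "http://10.0.0.2:3000"])
picks = [route.next_target() for _ in range(4)]
print(picks)
# ['http://10.0.0.1:3000', 'http://10.0.0.2:3000',
#  'http://10.0.0.1:3000', 'http://10.0.0.2:3000']
```

Because each incoming request just takes the next target in the rotation, adding or removing a backend only requires rebuilding the rotation, which is what makes zero-downtime route updates straightforward.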


Qubes OS 4.0.1-rc1 has been released!

Savia Lobo
06 Nov 2018
2 min read
Yesterday, the Qubes OS community announced the first release candidate of Qubes OS 4.0.1, the first of at least two planned point releases for version 4.0. Qubes OS, a free and open source security-oriented operating system, aims to provide security through isolation. Virtualization in Qubes OS is performed by Xen; user environments can be based on Fedora, Debian, Whonix, and Microsoft Windows. The community announced the release of 3.2.1-rc1 one month ago; since no serious problems have been discovered in 3.2.1-rc1, they plan to build the final version of Qubes 3.2.1 at the end of this week.

Features of Qubes OS 4.0.1-rc1

All 4.0 dom0 updates to date
Fedora 29 TemplateVM
Debian 9 TemplateVM
Whonix 14 Gateway and Workstation TemplateVMs
Linux kernel 4.14

The next release candidate

The second release candidate, 4.0.1-rc2, will include a fix for the Nautilus bug reported in #4460, along with any other available fixes for bugs reported against this release candidate. To know more about Qubes OS 4.0.1-rc1, visit its official release document.

QubesOS’ founder and endpoint security expert, Joanna Rutkowska, resigns; joins the Golem Project to focus on cloud trustworthiness
Harvard Law School launches its Caselaw Access Project API and bulk data service making almost 6.5 million cases available
Google now requires you to enable JavaScript to sign-in as part of its enhanced security features


Kernel 4.20-rc1 is out

Melisha Dsouza
06 Nov 2018
3 min read
Linus Torvalds announced on 4th November that kernel 4.20-rc1 is tagged and pushed out, and the merge window is closed. Linux 4.20 brings a lot of prominent changes: AMD Vega 20 support getting squared away, AMD Picasso APU support, Intel 2.5G Ethernet support, the removal of Speck, peer-to-peer PCI memory support, and other new hardware support additions and software features.

Here are some of the features of 4.20-rc1:

70% of the patch is driver updates, including changes in the GPU drivers
Arch updates in x86, arm64, arm, powerpc, and the new C-SKY architecture
Updates in the header files, networking, core mm and kernel, and tooling
The kernel will have more than 350 thousand lines of new code!
AMD Vega 20 7nm workstation GPU support is now largely squared away for when this graphics card is released in the months ahead
GPUVM performance improvements for the AMDGPU kernel driver
The Intel DRM driver now has full PPGTT support for Haswell/Ivy/Valley View hardware
Support for the Hygon Dhyana CPUs - the new Chinese data center processors based on AMD Zen
Scheduler improvements that should benefit asymmetric CPU systems like ARM big.LITTLE processors
Faster context switching on IBM POWER9
Several Btrfs performance improvements
Intel 2.5G Ethernet support, added via the new "IGC" driver
Xbox One S controller rumble support, Logitech high-resolution scrolling, and the new Apple Trackpad 2 driver among the input hardware improvements
The Linux kernel is now VLA-free (no variable-length arrays), for better code portability, performance, and security
Speck crypto code was removed, this crypto algorithm being quite controversial given its roots inside the NSA
The highly anticipated WireGuard secure VPN tunnel is held off until the next cycle
The FreeSync / Adaptive-Sync / HDMI VRR bits for DRM are also held off until the next cycle
The merge window lasts two weeks, and late pull requests are normally taken care of in its second week. Linus is considering making an explicit rule that he will stop taking new pull requests some time during the second week unless submitters have a good reason for the delay. He also hopes that by the time the next merge window rolls around there will be new automation in place, so that everybody is automatically notified when their pull request hits mainline. You can head over to Phoronix.com for a detailed list of all the new improvements added to 4.20-rc1. You can also read the changelog for further details. Soon, RHEL (Red Hat Enterprise Linux) won’t support KDE Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues
Prasad Ramesh
06 Nov 2018
3 min read

Microsoft announces .NET Standard 2.1

After a year of shipping .NET Standard 2.0, Microsoft announced .NET Standard 2.1 yesterday. In all, 3,000 APIs are planned for inclusion in .NET Standard 2.1, and at the time of writing progress on GitHub has reached 85% completion. The new features in .NET Standard 2.1 are as follows.

Span<T> in .NET Standard 2.1
Span<T> was added in .NET Core 2.1. It is an array-like type that represents managed and unmanaged memory in a uniform way. Span<T> is an important performance improvement since it allows buffers to be managed more efficiently: it supports slicing without copying and can help reduce allocations and copying.

Foundational APIs working with spans
Span<T> is available as a .NET Standard-compatible NuGet package, but a standalone package cannot extend the members of existing .NET Standard types to work with spans. For example, .NET Core 2.1 added many APIs for working with spans, so companion APIs were added to .NET Standard as well.

Reflection emit added in .NET Standard 2.1
.NET Standard 2.1 adds Lightweight Code Generation (LCG) and Reflection Emit. Two new capability APIs allow checking whether generating code is supported at all (RuntimeFeature.IsDynamicCodeSupported) and whether the generated code is compiled rather than interpreted (RuntimeFeature.IsDynamicCodeCompiled).

SIMD
.NET has had SIMD support for a while now; it is used to speed up basic operations like string comparisons in the BCL. There have been requests to expose these APIs in .NET Standard, but since the functionality requires runtime support, it cannot be provided meaningfully as a NuGet package.

ValueTask and ValueTask<T>
The biggest feature of .NET Core 2.1 was improved support for high-performance scenarios, which included making async/await more efficient. ValueTask<T> allows returning results when an operation completes synchronously, without having to allocate a new Task<T>.
.NET Core 2.1 improved this further, making it useful to have a corresponding non-generic ValueTask that reduces allocations even for cases where the operation has to complete asynchronously, a feature that types like Socket and NetworkStream now utilize. By exposing these APIs in .NET Standard 2.1, library authors benefit from these improvements both as consumers and as producers.

DbProviderFactories
DbProviderFactories wasn’t available in .NET Standard 2.0; it will be in 2.1. DbProviderFactories allows libraries and applications to make use of a specific ADO.NET provider without knowing any of its specific types at compile time.

Other changes
Many small features have been added across the base class libraries, including System.HashCode for combining hash codes and new overloads on System.String. There are roughly 800 new members in .NET Core, and all of them are added in .NET Standard 2.1. .NET Framework 4.8 will remain on .NET Standard 2.0, while .NET Core 3.0 and the upcoming versions of Xamarin, Mono, and Unity will be updated to implement .NET Standard 2.1. To ensure correct implementation of APIs, a review board has been established to sign off on API additions to .NET Standard. The board, chaired by Miguel de Icaza, comprises representatives from the .NET platform, Xamarin and Mono, Unity, and the .NET Foundation. There will also be a formal approval process for new APIs. To know more, visit the Microsoft Blog. .NET Core 3.0 and .NET Framework 4.8 more details announced .NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5 What to expect in ASP.NET Core 3.0
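Span<T>'s headline trick of slicing a buffer without copying it has a rough analogue in Python's built-in memoryview. The sketch below is an analogy only, not .NET code: a slice of a memoryview is a window onto the same bytes, so writes through the slice are visible in the original buffer.

```python
import array

buf = array.array('b', range(10))   # a small writable buffer: 0..9
view = memoryview(buf)
window = view[2:6]                  # slice of the buffer: no copy is made
window[0] = 42                      # writes through to the original buffer
```

As with Span<T>, the benefit is that code can hand around sub-ranges of a buffer without allocating new arrays for each slice.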
Savia Lobo
05 Nov 2018
3 min read

Harvard Law School launches its Caselaw Access Project API and bulk data service making almost 6.5 million cases available

On October 31st, the Library Innovation Lab at the Harvard Law School Library announced the launch of its Caselaw Access Project API and bulk data service. The service makes available almost 6.5 million cases, spanning the 1600s to the present, putting the full corpus of published U.S. case law online for anyone to access for free. According to Harvard Law Today, “Between 2013 and 2018, the Library digitized over 40 million pages of U.S. court decisions, transforming them into a dataset covering almost 6.5 million individual cases.” The Caselaw Access Project API and bulk data service puts this important dataset within easy reach of researchers, members of the legal community, and the general public. Adam Ziegler, director of the Library Innovation Lab, said in an article in Fortune Magazine, “the Caselaw Access Project will be a treasure trove for legal scholars, especially those who employ big data techniques to parse the corpus. It’s an opportunity to reconstruct the law as a data source, and write computer programs to peruse millions of cases.”

The CAP API and the bulk data service
The CAP API is available at api.case.law and offers open access to descriptive metadata for the entire corpus. The API documentation is written to be easy for both experts and beginners to understand. Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School and Vice Dean for Library and Information Resources, said, “Libraries were founded as an engine for the democratization of knowledge, and the digitization of Harvard Law School’s collection of U.S. case law is a tremendous step forward in making legal information open and easily accessible to the public.”

Putting the CAP API and the bulk data service to use
John Bowers, a research associate at the Harvard Library Innovation Lab, used the Caselaw Access Project API and bulk data service to uncover the story of Justice James H. Cartwright, the most prolific opinion writer on the Illinois Supreme Court, as described in Bowers' recent blog post. Bowers said, “In the hands of an interested researcher with questions to ask, a few gigabytes of digitized caselaw can speak volumes to the progress of American legal history and its millions of little stories.” By digitizing these materials, the Harvard Law School Library aims to provide open, wide-ranging access to American case law, making its collection broadly accessible to nonprofits, academics, practitioners, researchers, and law students: anyone with a smartphone or Internet connection can access this data. Read more about this project in detail on the Caselaw Access Project site. Data Theorem launches two automated API security analysis solutions – API Discover and API Inspect Michelangelo PyML: Introducing Uber’s platform for rapid machine learning development Twilio acquires SendGrid, a leading Email API Platform, to bring email services to its customers
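A researcher's first query against the CAP API might look like the sketch below. The endpoint path (`/v1/cases/`) and the `search`, `jurisdiction`, and `page_size` parameter names are assumptions drawn from the public documentation at api.case.law; verify them before relying on this.

```python
from urllib.parse import urlencode

# Assumed CAP cases endpoint; check api.case.law docs before use.
CAP_CASES = "https://api.case.law/v1/cases/"

def build_search_url(query, jurisdiction=None, page_size=5):
    """Build a full-text search URL against the CAP cases endpoint."""
    params = {"search": query, "page_size": page_size}
    if jurisdiction:
        params["jurisdiction"] = jurisdiction
    return CAP_CASES + "?" + urlencode(params)

url = build_search_url("habeas corpus", jurisdiction="ill")
# A real client would now fetch and parse the JSON response, e.g.:
#   import json, urllib.request
#   cases = json.load(urllib.request.urlopen(url))["results"]
```

Metadata queries like this are unauthenticated; full case text beyond the open jurisdictions requires registering for an API key.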
Amrata Joshi
05 Nov 2018
6 min read

Fake news is a danger to democracy. These researchers are using deep learning to model fake news to understand its impact on elections.

Last month, researchers from the University of Surrey and St Petersburg National Research University of Information Technologies, Mechanics and Optics published a paper titled How to model fake news. The paper states, “Until now there has been no clear understanding of how to define fake news, much less how to model it. This paper addresses both these issues. A definition of fake news is given, and two approaches for the modelling of fake news and its impact in elections and referendums are introduced.”

Why all the fuss about fake news
According to the researchers, “fake news is information that is inconsistent with factual reality. It is information that originates from the ‘sender’ of fake news, is transmitted through a communication channel and is then received, typically, by the general public. Hence any realistic model for fake news has to be built on the successful and well-established framework of communication theory.” False stories on the internet have made it difficult for many to distinguish what is true from what is false; the existence of Holocaust denialists, for instance, illustrates how doubts about even a major historical event can gain traction with certain individuals. Fake news has become such a serious concern to society that it can endanger the democratic process itself, a danger that became widely acknowledged after the 2016 US presidential election and the ‘Brexit’ referendum in the UK on membership of the European Union.

How are researchers trying to model fake news
The researchers present two approaches for the modelling of fake news in elections. The first is based on the idea of a representative voter, useful for obtaining a qualitative understanding of the effects of fake news. The other is based on the idea of an election microstructure, useful for practical implementation in concrete scenarios.
The researchers divide voters into two categories, Category I and Category II. Category I voters are unaware of the existence of fake news. Category II voters know that there may be fake news in circulation, but do not know how many pieces of fake news have been released, or at what times.

Approach 1: Using a Representative Voter Framework
In the first approach, those who are influenced by fake news are not viewed as irrational; they simply lack the ability to detect and mitigate the changes caused by the fake news. The transition from the behavioral model of an individual to that of the electorate leads to the idea of a ‘representative voter’ whose perception represents the aggregation of the diverse views held by the public at large. Category I voters exemplify the representative voter, since they can neither detect nor mitigate the fake news. The researchers examine the problem of estimating the release times of fake news, which poses a new type of challenge in communication theory. This estimate is required to characterize a voter who is aware of the potential presence of fake news but unsure which items of information are fake. The researchers illustrate the dynamics of opinion-poll statistics in a referendum in the presence of a single piece of fake news, and then consider an application to an election where multiple pieces of fake news are released at random times. For instance, the model can replicate the qualitative behavior of the opinion-poll statistics during the 2016 US presidential election.
Approach 2: Using an ‘election microstructure’ model
The researchers further introduce an ‘election microstructure’ model in which an information-based scheme describes the dynamical behavior of individual voters and the resulting collective voting behavior of the electorate under the influence of fake news. Category II voters exemplify this setting: they know fake news may be circulating, but not how many pieces are in play or precisely when they were released. The modelling framework proposed in the paper follows Wiener’s philosophy. The authors apply and extend techniques of filtering theory, a branch of communication theory that aims at filtering noise out of communication channels, in a novel way to generate models that are well suited to the treatment of fake news. The mathematics of the election microstructure model is the same as that of the representative voter framework; the only difference is that in the election microstructure model the signal in the information process can be transmitted by a sender (e.g., the candidate).

Deep learning can be used to solve the problem of fake news
According to the researchers, deep learning and related techniques can help in the detection and prevention of fake news. However, to address the issues surrounding its impact, it is important to develop a consistent mathematical model describing the phenomena that result from the flow of fake news. Such a model should be intuitive and tractable, so that model parameters can be calibrated against real data and predictions can be made, either analytically or numerically. In both approaches, the results illustrate the impact of fake news on elections and referendums. The researchers further demonstrate that by merely estimating the presence of fake news, an individual is able to largely mitigate its effects.
Of the two categories of voters described, the researchers conclude that Category II voters know the parameters of the fake news terms.

Future Scope
The researchers plan to include optimal release strategies in their models. The election microstructure approach might also be developed further by allowing dependencies between the various factors, and the researchers plan to introduce several different information processes reflecting the news consumption preferences of different sections of society. These additions would be challenging, but they open an interesting direction for research. To know more about the modelling techniques for fake news, check out the paper How to model fake news. BabyAI: A research platform for grounded language learning with human in the loop, by Yoshua Bengio et al Facebook is reportedly rating users on how trustworthy they are at flagging fake news Four 2018 Facebook patents to battle fake news and improve news feed
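The idea of fake stories released at random times, each nudging an opinion poll before its influence fades, lends itself to a quick toy simulation. The sketch below is purely illustrative (it is not the paper's filtering-theory model): poll bias around a 50/50 baseline, shocked at random release times, with each shock decaying geometrically.

```python
import random

def simulate_poll(steps=100, n_fake=3, shock=0.08, decay=0.9, seed=1):
    """Toy sketch only: a poll series around a 50/50 baseline, with
    fake stories released at random times, each adding a bias that fades."""
    rng = random.Random(seed)
    releases = set(rng.sample(range(steps), n_fake))  # random release times
    bias, series = 0.0, []
    for t in range(steps):
        if t in releases:
            bias += shock * rng.choice([-1, 1])  # a new fake story lands
        bias *= decay                            # its influence decays
        series.append(0.5 + bias)                # observed poll level
    return sorted(releases), series

releases, series = simulate_poll()
```

Plotting `series` shows the qualitative shape the paper discusses: visible kinks at the release times that relax back toward the baseline as the stories lose their hold.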
Bhagyashree R
05 Nov 2018
3 min read

EFF asks California Supreme Court to hear a case on government data accessibility and anonymization under CPRA

Last week, the Electronic Frontier Foundation (EFF) issued a letter supporting the petition for review filed by Richard Sander and the First Amendment Coalition in the Sander v. State Bar of California case. The opinion issued by the First District Court of Appeal in August effectively rewrites the California Public Records Act (CPRA) in a way that could prevent California citizens from accessing public data that state and local agencies are generating. The court ruled that in order to de-identify personal information, the State Bar of California has to create “new records” to “recode its original data into new values.” EFF has raised the question the California Supreme Court must address: does anonymization of public data amount to the creation of new records under the CPRA? If the court’s opinion on creating new records becomes the standard across California, it will defeat the purpose of the CPRA. The CPRA was signed in 1968 as the result of a 15-year effort to create a general records law for California. Under the CPRA, governmental records must be shared on public request unless there is a legal reason to withhold them. The act enables people to understand what the government is doing and helps prevent government inefficiencies; it matters all the more today, given the vast amount of digital data produced and consumed by governments. In an earlier hearing, the California Supreme Court acknowledged that sharing this data with the public would prove useful: “It seems beyond dispute that the public has a legitimate interest in whether different groups of applicants, based on race, sex or ethnicity, perform differently on the bar examination and whether any disparities in performance are the result of the admissions process or of other factors.” However, when the case proceeded to trial, the petitioners were asked to show how it was possible to de-identify this data.
But according to the CPRA, when the government refuses to share records requested by the public, it must show the court that it is not possible to release the data while protecting private information at the same time. EFF further pointed out that in another case, Exide Technologies v. California Department of Public Health, a different superior court in California ruled the opposite way: the government agency must share its investigations of blood lead levels, but in a format that serves the public interest in government transparency while protecting the privacy interests of individual lead-poisoning patients. This requires the California Supreme Court to settle how agencies should handle sensitive digital information under the CPRA. With the increase in data collected by the state from and about the public, it is important that agencies provide access to this data in order to maintain transparency. Read the full announcement on EFF's official website. Senator Ron Wyden’s data privacy law draft can punish tech companies that misuse user data Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill “that sets a floor, not a ceiling” Is AT&T trying to twist data privacy legislation to its own favor?
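To make the court's phrase "recode its original data into new values" concrete, here is a minimal, hypothetical Python sketch of one form such recoding could take: a direct identifier is replaced with a salted-hash pseudonym. The field names and salt are invented for illustration, and real de-identification must also account for indirect identifiers that can re-identify people in combination.

```python
import hashlib

def pseudonymize(records, key_field, salt="agency-secret"):
    """Recode a direct identifier into a stable pseudonym. Illustrative
    only: genuine de-identification must also weigh indirect identifiers."""
    out = []
    for rec in records:
        rec = dict(rec)  # leave the agency's original records untouched
        raw = (salt + str(rec.pop(key_field))).encode()
        rec["pseudo_id"] = hashlib.sha256(raw).hexdigest()[:12]
        out.append(rec)
    return out

# Hypothetical example rows, not real bar data.
rows = [{"name": "A. Smith", "bar_score": 71},
        {"name": "B. Jones", "bar_score": 65}]
released = pseudonymize(rows, "name")  # names replaced by pseudonyms
```

Whether producing such recoded rows counts as creating a "new record" under the CPRA is exactly the legal question EFF wants the Supreme Court to resolve.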
Richard Gall
05 Nov 2018
2 min read

SoftBank CEO says Khashoggi murder could have an impact on Saudi-backed $100 billion Vision Fund pouring money into Silicon Valley

The CEO of Japanese investment bank SoftBank, Masayoshi Son, has said that the killing of Saudi journalist Jamal Khashoggi could have an impact on the bank's $100 billion Vision Fund. The Vision Fund, which is pouring money into Silicon Valley companies including Uber and Slack, is backed by Saudi Arabia. Speaking at a quarterly earnings call on Monday 5 November, Son condemned Khashoggi's murder, describing it as an "act against humanity and also journalism and free speech… a horrible and deeply regrettable act." However, he did not commit to any specific action, instead indicating that he and fellow investors would be cautious in planning the next round of funding for the Vision Fund. Son tempered any suggestion of punitive action, saying, "Before this tragic case happened, we had already accepted a responsibility to the people of Saudi Arabia to help them manage their financial resources and we can’t all of a sudden drop such responsibility." From this perspective, it doesn't look like Son wants to do anything that could damage the relationship with his largest investors. "As horrible as this event was we cannot turn our backs on the Saudi people as we work to help them in their continued efforts to reform and modernize their society." - Masayoshi Son Khashoggi's death has done some damage to SoftBank's shares, with the stock falling 20% since October 5 (as reported by CNBC), but it doesn't look like SoftBank and Vision Fund leaders are quite ready to cut ties with Saudi Arabia over the killing. Instead, Son simply said that the Vision Fund will "think very carefully" about what funds it accepts from Saudi Arabia in future. Son met with the Saudi Crown Prince last month to express his concerns about the incident, but did not attend the Future Investment Initiative. An important event for Saudi Arabia on the global stage, the FII was overshadowed by Khashoggi's death.
For context, other business leaders have been decisive and clear in their relationship with Saudi Arabia. Richard Branson, for example, ended talks with the Saudi Public Investment Fund - the same fund that pushes money into Vision Fund - which was interested in investing in Virgin Galactic and Orbit, Virgin's two space-related projects.
Prasad Ramesh
05 Nov 2018
4 min read

Crystal 0.27.0 released

Crystal is a general-purpose, object-oriented programming language with over 300 contributors. Last Friday, Crystal 0.27.0 was released.

Language changes in Crystal 0.27.0
From Crystal 0.27.0, if the arguments of a method call need to be split across multiple lines, the comma must be placed at the end of the line, just before the line break. This is more in line with other conventional languages.

Better handling of stack overflows
A program entering infinite recursion or running out of space in stack memory causes a stack overflow. Crystal 0.27.0 ships with a boundary check that allows a better error message on stack overflow.

Concurrency and parallelism changes
The next releases should start shipping parallelism, and some steps have been taken in preparation. The Boehm GC gained an API enabling support for multithreaded environments in v7.6.x; from this version of Crystal, GC 7.6.8 or greater is used. As Crystal 0.26.1 shipped with v7.4.10, the dependency needed to be updated first so that the CI could compile the compiler against the new GC API. Refactoring was also done to separate the responsibilities of Fiber, Event, Scheduler, and EventLoop.

Arithmetic operators added
Crystal 0.27.0 adds the arithmetic operators &+, &- and &*, which perform addition, subtraction, and multiplication with wrapping. In one of the next versions, the regular operators will raise on overflow. This will allow users to trust the result of operations when reaching the limits of the representable range.

Collection names changed
There are some breaking changes in the Indexable module and Hash. Indexable#at was replaced in favor of Indexable#fetch. The API between Indexable and Hash is now more aligned, including the ways to deal with default values in case of a missing key. If no default value is needed, the #[] method must be used; this is true even for Hash, since Hash#fetch(key) was dropped.
Time changes
There are breaking changes to support cleaner and more portable names: all references to “epoch” are replaced with “unix”. Effectively, Time#epoch was renamed to Time#to_unix, #epoch_ms to #unix_ms, and #epoch_f to #to_unix_f. ISO calendar week numbers are now supported, and changing the time zone while maintaining the wall clock is also easy.

File changes
Working with temporary files and directories used to require the Tempfile class. The creation of such files is now handled by File.tempfile or File.tempname. This change also tidies up the usage of prefix, suffix and default temp path.

Platform support
An issue was detected in the Boehm GC while running in Google Cloud. The fix will be released in the next version of the GC; meanwhile, a patch is included in Crystal 0.27.0. There is some preparation for Windows support related to processes, forking, file handlers and arguments. Other fixes include signals between forked processes and how IO on a TTY behaves in different environments.

Networking changes
HTTP::Server#bind_ssl was deprecated since #bind_tls was introduced; it wasn’t removed outright to avoid a breaking change. The bindings for OpenSSL were updated to support v1.1.1.

Compiler changes
Support for annotations inside enums has been added. Calling super will by default forward all the method arguments, even if the call was expanded by macros in this version. When using splat arguments, the type of the values can be restricted; this also goes for a whole Tuple or NamedTuple expected as splatted arguments. A bug that occurred when these restrictions were used has been fixed. For a complete list of changes, visit the Crystal changelog. WebAssembly – Trick or Treat? Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web The D language front-end support finally merged into GCC 9
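The wrapping behaviour of Crystal's new &+, &- and &* operators can be illustrated in any language by masking the result to the type's width. The Python sketch below is a conceptual illustration only (not Crystal), fixed at 32 bits; in Crystal the wrap happens at the width of the integer type involved.

```python
def wrapping_add32(a, b):
    # Keep only the low 32 bits, discarding overflow, the way &+ wraps
    # for a 32-bit integer type.
    return (a + b) & 0xFFFFFFFF

# UInt32 maximum plus one wraps around to zero
assert wrapping_add32(0xFFFFFFFF, 1) == 0
```

Once the regular operators raise on overflow in a future release, code that deliberately relies on modular arithmetic (hashes, checksums, ring buffers) will reach for the &-prefixed forms instead.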