Tech News - Data

1209 Articles

Stable release of CUDA 10.0 out, with Turing support, tools and library changes

Prasad Ramesh
01 Oct 2018
3 min read
CUDA 10.0 was released in mid-September, bringing updates to the compiler, tools, and libraries. Support has also been added for the Turing architecture targets compute_75 and sm_75.

Compiler changes in CUDA 10.0

The paths of some compilers have changed. The CUDA C and CUDA C++ compiler, nvcc, is now located in the bin/ directory. nvcc is built on top of the NVVM optimizer, which is in turn built on the LLVM compiler infrastructure. If you want to target NVVM directly, use the Compiler SDK available in the nvvm/ directory.

The following files are compiler-internal and can change without any prior notice:
- Any files in include/crt and bin/crt
- include/common_functions.h, include/device_double_functions.h, include/device_functions.h, include/host_config.h, include/host_defines.h, and include/math_functions.h
- nvvm/bin/cicc
- bin/cudafe++, bin/bin2c, and bin/fatbinary

These compilers are supported as host compilers in nvcc:
- Clang 6.0
- Microsoft Visual Studio 2017 (RTW, Update 8 and later)
- Xcode 9.4
- XLC 16.1.x
- ICC 18
- PGI 18.x (with -std=c++14 mode)

Note that, starting with CUDA 10.0, nvcc supports all versions of Visual Studio 2017, both previous versions and newer updates.

CUDA 10.0 adds a new libNVVM API function, nvvmLazyAddModuleToProgram. It is used to add the libdevice module, and other similar modules, to a program more efficiently.

The --extensible-whole-program (or -ewp) option has been added to nvcc. It can be used to perform whole-program optimizations, and with it you can use CUDA device parallelism features without having to use separate compilation.

Warp matrix functions (wmma), first introduced in PTX ISA version 6.0, are now fully supported from PTX ISA version 6.0 onwards.

Tool changes

Except for Nsight Visual Studio Edition (VSE), which is installed as a plug-in to Microsoft Visual Studio, the following tools are available in the bin/ directory:
- IDEs: nsight (Linux, Mac), Nsight VSE (Windows)
- Debuggers: cuda-memcheck, cuda-gdb (Linux), Nsight VSE (Windows)
- Profilers: nvprof, nvvp, Nsight VSE (Windows)
- Utilities: cuobjdump, nvdisasm, gpu-library-advisor

CUDA 10.0 also includes Nsight Compute, a set of developer tools for profiling and debugging, supported on Windows, Linux, and Mac. nvprof now supports the OpenMP tools interface, and the profiler now supports NVIDIA Tools Extension API (NVTX) v3.

Changes have also been made to the nvJPEG, cuFFT, cuBLAS, NVIDIA Performance Primitives (NPP), and cuSOLVER libraries. CUDA 10.0 ships libraries optimized for the Turing architecture, and nvJPEG is a new library for GPU-accelerated hybrid JPEG decoding. For a complete list of changes, visit the NVIDIA website.

- Microsoft Azure now supports NVIDIA GPU Cloud (NGC)
- NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
- NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499

IBM unveils world’s fastest supercomputer with AI capabilities, Summit

Natasha Mathur
11 Jun 2018
3 min read
The world's most powerful and smartest supercomputer, called Summit, has been unveiled by IBM and the Department of Energy's Oak Ridge National Laboratory. It is capable of performing 200 quadrillion calculations per second, a speed of 200 petaflops, which is roughly equivalent to all 7.6 billion people on the planet doing 26 million calculations each second on a basic calculator (a quick back-of-the-envelope check of this comparison appears at the end of this article).

Summit was funded back in 2014 as part of a $325 million Department of Energy program called Coral, but it took quite a few years to develop. Summit delivers its speed through new processors, fast storage, internal communications, and a versatile design that can apply artificial intelligence methods, which also makes it quite expensive. Let's have a look at the features of the Summit supercomputer.

Supercomputer and AI integration

Dave Turek, vice president of high-performance computing and cognitive systems at IBM, said that AI and high-performance computing are not different domains; the two are deeply interconnected, which is why Summit will use AI methods for a range of purposes. Summit will mainly be used for AI development and machine learning. Apart from AI, Oak Ridge will use Summit to carry out scientific research in areas such as chemical formula design, studying links between cancer and genes at large scale, fusion energy research, astrophysics, and simulation of Earth's changing climate.

Super big supercomputer

Source: Oak Ridge National Laboratory

Summit consists of 4,608 interconnected computer servers housed in refrigerator-sized cabinets. It takes up an eighth of an acre, which, to put it into perspective, is about the size of two tennis courts. Summit's peak energy consumption is 15 megawatts, enough to power more than 7,000 homes. Each server has two IBM Power9 chips running at 3.1 GHz, each with 22 cores running in parallel, and six Nvidia Tesla V100 GPUs. Each server holds 1.6 terabytes of memory, and data can be written at 2.2 terabytes per second to a 250-petabyte storage system, which the lab describes as 1,000 times the storage capacity of a high-end laptop.

Supercomputer performance measures

Supercomputer performance is measured with a benchmark called Linpack in the Top500 list, where China's Sunway TaihuLight currently holds the highest Linpack score of 93 petaflops. But Turek feels that measuring the value of a machine on a single figure of merit is not that meaningful; rather, a machine should be judged by how it scales on real applications.

Summit is also a step in IBM's push toward exascale computing. With Summit, IBM is confident it can reach its goal of building a system capable of performing a quintillion calculations per second (five times the speed of Summit). Along with Summit, work is also being done on a less powerful computer, Sierra. Both are scheduled to go online sometime this year. This will take the US arsenal of supercomputers a step forward in the competition: lately the top spots have been held by other countries, but Summit is the United States' chance to retake the lead.

- PyCon US 2018 Highlights: Quantum computing, blockchains, and serverless rule!
- Quantum A.I.: An intelligent mix of Quantum + A.I.
- Q# 101: Getting to know the basics of Microsoft's new quantum computing language
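
As noted above, the calculator comparison is easy to sanity-check with rough arithmetic. The snippet below is only a back-of-the-envelope check of the figures quoted in the article, not an official benchmark calculation.

```python
# Back-of-the-envelope check of the calculator comparison quoted above.
people = 7.6e9            # people on the planet
calcs_per_person = 26e6   # calculations per person per second, on a basic calculator
combined = people * calcs_per_person

print(f"Everyone together: {combined:.3e} calculations/second")  # ~1.976e+17
print(f"Summit's peak:     {200e15:.3e} flops (200 petaflops)")  # 2.000e+17
```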

France and Germany reaffirm blocking Facebook’s Libra cryptocurrency

Sugandha Lahoti
16 Sep 2019
4 min read
Update, Oct 14: Following PayPal, Visa, Mastercard, eBay, Stripe, and Mercado Pago have also withdrawn from Facebook's Libra Association. These withdrawals leave Libra with no major US payment processor, denting a big hole in Facebook's plans for a distributed, global cryptocurrency. David Marcus, Libra chief, called this "no great news in the short term".
https://twitter.com/davidmarcus/status/1182775730427572224

Update, Oct 4: After several countries, corporate backer PayPal is backing away from Facebook's Libra Association, the company announced on October 4. "PayPal has made the decision to forgo further participation in the Libra Association at this time and to continue to focus on advancing our existing mission and business priorities as we strive to democratize access to financial services for underserved populations," PayPal said in a statement.

In a joint statement released last week on Friday, France and Germany agreed to block Facebook's Libra in Europe. France had been debating banning Libra for quite some time. On Thursday, at the OECD Conference 2019 on virtual currencies, French Finance Minister Bruno Le Maire told attendees that he would do everything in his power to stop Libra. He said, "I want to be absolutely clear: in these conditions, we cannot authorize the development of Libra on European soil." Le Maire was also in favor of the Eurozone issuing its own digital currency, commonly dubbed 'EuroCoin' in the press.

In the joint statement, the two governments of France and Germany wrote, "As already expressed during the meeting of G7 Finance Ministers and Central Bank Governors in Chantilly in July, France and Germany consider that the Libra project, as set out in Facebook's blueprint, fails to convince that risks will be properly addressed. We believe that no private entity can claim monetary power, which is inherent to the sovereignty of Nations."

In June, Facebook had announced its ambitious plans to launch its own cryptocurrency, Libra, in a move to disrupt the digital ecosystem. Libra's launch alarmed experts who foresee it as a shift of control over the economy from governments and their central banks to privately held tech giants. A co-founder of Chainspace, Facebook's blockchain acquisition, said that he was "concerned about Libra's model for decentralization". He added, "My concern is that Libra could end up creating a financial system that is *less* censorship-resistant than our current traditional financial system. You see, our current banking system is somewhat decentralized on a global scale, as money travels through a network of banks."

The US administration is also worried about a non-governmental currency in the hands of big tech companies. In early July, the US Congress asked Facebook to suspend the implementation of Libra until the ramifications were investigated. In an interview with Bloomberg, Mu Changchun, deputy director of the People's Bank of China's payments department, said that as a convertible crypto asset or a type of stablecoin, Libra can flow freely across borders, and it "won't be sustainable without the support and supervision of central banks."

People enthusiastically shared this new development on Twitter.

"Europe is leading the way to become the blockchain hub"
https://twitter.com/AltcoinSara/status/1172582618971422720

"I always thought China would be first off the blocks on regulating Libra."
https://twitter.com/Frances_Coppola/status/1148420964264370179

"France blocks libra and says not tax for crypto to crypto exchanges. America still clinging on and stifling innovation hurting investors and developers"
https://twitter.com/cryptoMD45/status/1172228992532983808

For now, a working group has been tasked by the G7 Finance Ministers to analyze the challenges posed by cryptocurrencies. Its final report will be presented in October.

More interesting Tech News
- Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case
- Margrethe Vestager, EU's Competition Commissioner gets another term and expanded power to make "Europe fit for the digital age"
- Hundreds of millions of Facebook users' phone numbers found online, thanks to an exposed server

After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases’ offering

Savia Lobo
20 Aug 2019
2 min read
Today, DigitalOcean, the cloud for developing modern apps, announced that it has introduced Managed Databases for MySQL and Redis, the popular open source relational and in-memory databases, respectively. These offerings eliminate the complexity involved in managing, scaling, and securing database infrastructure, and instead allow developers to focus on building apps.

DigitalOcean's Managed Databases service was launched in February, with PostgreSQL as its first offering, and allows developers to create fully managed database instances in the cloud. Managed Databases provides features such as worry-free setup and maintenance, free daily backups with point-in-time recovery, standby nodes with automated failover, end-to-end security, and scalable performance (a minimal connection sketch appears at the end of this article). The new offerings build upon the existing support for PostgreSQL, providing worry-free maintenance for three of the most popular database engines.

DigitalOcean's Senior Vice President of Product, Shiven Ramji, said, "With the additions of MySQL and Redis, DigitalOcean now supports three of the most requested database offerings, making it easier for developers to build and run applications, rather than spending time on complex management."

"The developer is not just the DNA of DigitalOcean, but the reason for much of the company's success. We must continue to build on this success and support developers with the services they need most on their journey towards simple app development," he further added.

DigitalOcean selected MySQL and Redis as the next offerings for its Managed Databases service due to overwhelming demand from its customer base and the developer community at large. The Managed Databases offerings for MySQL and Redis are available in the New York, Frankfurt, and San Francisco data center regions, with support for additional regions being added over the next few weeks. To know more about this news in detail, head over to DigitalOcean's official website.

- DigitalOcean announces 'Managed Databases for PostgreSQL'
- DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
- Limited Availability of DigitalOcean Kubernetes announced!
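
As referenced above, here is a minimal sketch of what connecting to managed Redis and MySQL clusters looks like from Python. The hostnames, ports, and credentials below are placeholders you would copy from the DigitalOcean control panel (they are not real endpoints), and the client libraries used (redis, mysql-connector-python) are ordinary open source drivers rather than anything DigitalOcean-specific.

```python
# Minimal sketch, not an official DigitalOcean example. Connection details are
# placeholders; managed clusters are reached over TLS with credentials taken
# from the control panel.
import redis                    # pip install redis
import mysql.connector          # pip install mysql-connector-python

# Managed Redis (placeholder host/port/password)
r = redis.Redis(
    host="db-redis-example.db.ondigitalocean.com",
    port=25061,
    password="REDIS_PASSWORD",
    ssl=True,
)
r.set("greeting", "hello from managed Redis")
print(r.get("greeting"))

# Managed MySQL (placeholder host/port/credentials)
conn = mysql.connector.connect(
    host="db-mysql-example.db.ondigitalocean.com",
    port=25060,
    user="doadmin",
    password="MYSQL_PASSWORD",
    database="defaultdb",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()
```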

GitHub introduces ‘Experiments’, a platform to share live demos of their research projects

Bhagyashree R
19 Sep 2018
2 min read
Yesterday, GitHub introduced Experiments, a platform for sharing demonstrations of its research projects and the ideas behind them. With this platform, it aims to give end users "insight into their research and inspire them to think audaciously about the future of software development".

Why has GitHub introduced 'Experiments'?

Just like Facebook and Google, GitHub regularly conducts research in machine learning, design, and infrastructure. The resulting products are rigorously evaluated for stability, performance, and security, and if they meet the success criteria for product release, they are released to end users. Experiments will help GitHub share details about this research as it happens.

'Semantic Code Search': the first demo published on Experiments

The GitHub researchers also published their first demo of an experiment, called Semantic Code Search. This system helps you search code on GitHub using natural language.

How does Semantic Code Search work?

The following diagram shows how Semantic Code Search works:

Source: GitHub

Step 1: Learning representations of code. A sequence-to-sequence model is trained to summarize code by supplying (code, docstring) pairs. The docstring is the target variable the model is trying to predict.

Step 2: Learning representations of text phrases. Along with learning representations of code, the researchers wanted a suitable representation for short phrases. To achieve this, they trained a neural language model using the fast.ai library. Using the concat pooling approach, phrase representations were extracted from the trained model by summarizing its hidden states.

Step 3: Mapping code representations to the same vector space as text. The code representations learned in step 1 are mapped to the vector space of text by fine-tuning the code encoder.

Step 4: Creating a semantic search system. The last step brings everything together into a semantic search mechanism. The vectorized version of all code is stored in a database, and nearest-neighbor lookups are performed against a vectorized search query (a minimal sketch of this step appears at the end of this article).

You can read the official announcement on GitHub's blog. To read about Semantic Code Search in more detail, check out the researchers' post and try it on Experiments.

- Packt's GitHub portal hits 2,000 repositories
- GitHub parts ways with JQuery, adopts Vanilla JS for its frontend
- Github introduces Project Paper Cuts for developers to fix small workflow problems, iterate on UI/UX, and find other ways to make quick improvements
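
To make step 4 concrete, here is a minimal sketch of the retrieval side only. The toy embed() function below is a stand-in for the trained encoders from steps 1-3 (it merely hashes text into a random unit vector), so the "best match" only becomes meaningful once real, semantically trained embeddings are plugged in; nothing here is GitHub's actual model.

```python
# Minimal sketch of step 4: nearest-neighbor lookup in a shared vector space.
# embed() is a placeholder for the trained code/text encoders from steps 1-3.
import numpy as np

def embed(text, dim=128):
    # Toy embedding: seed a random unit vector from the text (consistent within a run).
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# The "database": vectorized versions of all code snippets.
snippets = [
    "def read_csv(path): ...",
    "def train_model(data): ...",
    "def plot_histogram(values): ...",
]
code_vectors = np.stack([embed(s) for s in snippets])

# A natural-language query is embedded into the same space and answered by a
# nearest-neighbor lookup (dot product equals cosine similarity on unit vectors).
query_vector = embed("load a csv file")
scores = code_vectors @ query_vector
print("best match:", snippets[int(np.argmax(scores))])
```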

TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

Vincy Davis
10 Jun 2019
5 min read
After all the hype and waiting, Google has finally announced the beta version of TensorFlow 2.0. The headline feature is tf.distribute.Strategy, which distributes training across multiple GPUs, multiple machines, or TPUs with minimal code changes. The TensorFlow 2.0 beta also brings a number of major improvements, breaking changes, and bug fixes.

Earlier this year, the TensorFlow team had updated users on what to expect from TensorFlow 2.0. The 2.0 API is now final, with the symbol renaming and deprecation changes completed, and it is available as part of the TensorFlow 1.14 release in the compat.v2 module.

TensorFlow 2.0 support for Keras features

Distribution Strategy for hardware

tf.distribute.Strategy supports multiple user segments, including researchers and ML engineers, and provides good performance and easy switching between strategies. Users can use the tf.distribute.Strategy API to distribute training across multiple GPUs, multiple machines, or TPUs, and can distribute their existing models and training code with minimal code changes. tf.distribute.Strategy can be used with:
- TensorFlow's high-level APIs
- tf.keras
- tf.estimator
- Custom training loops

The TensorFlow 2.0 beta also simplifies the API for custom training loops, again based on tf.distribute.Strategy. Custom training loops give flexibility and greater control over training, and make it easier to debug the model and the training loop.

Model Subclassing

Building a fully customizable model by subclassing tf.keras.Model allows users to define their own forward pass. Layers are created in the __init__ method and set as attributes of the class instance, and the forward pass is defined in the call method (a minimal sketch follows later in this article). Model subclassing is particularly useful when eager execution is enabled, because it allows the forward pass to be written imperatively, and it gives greater flexibility when creating models that are not easily expressible otherwise.

Breaking Changes

tf.contrib has been deprecated, and its functionality has been migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely. In the tf.estimator.DNN/Linear/DNNLinearCombined family, the premade estimators have been updated to use tf.keras.optimizers instead of tf.compat.v1.train.Optimizer. A checkpoint converter tool for converting optimizers is also included with this release.

Bug Fixes and Other Changes

This beta version of 2.0 includes many bug fixes and other changes. Some of them are mentioned below:
- In tf.data.Options, the experimental_numa_aware option has been removed, and support for TensorArrays has been added.
- tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which allows saved checkpoints to be compatible with model.load_weights.
- tf.contrib.estimator.add_metrics has been replaced with tf.estimator.add_metrics.
- A gradient for the SparseToDense op, a GPU implementation of tf.linalg.tridiagonal_solve, and broadcasting support for tf.matmul have been added.
- This beta version also exposes a flag that allows the number of threads to vary across Python benchmarks.
- The unused StringViewVariantWrapper and tf.string_split have been removed from the v2 API.

The TensorFlow team has set up a TF 2.0 Testing User Group where users can report any snags they experience and give feedback.
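
As mentioned in the Model Subclassing section above, here is a minimal sketch of subclassing tf.keras.Model, combined with a distribution strategy. The layer sizes, optimizer, and loss are illustrative choices rather than anything prescribed by the release notes, and MirroredStrategy is just one of the strategies tf.distribute provides.

```python
# Minimal sketch of model subclassing combined with a distribution strategy.
# Layer sizes, optimizer, and loss are illustrative, not prescribed by the release notes.
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self, num_classes=10):
        super(MyModel, self).__init__()
        # Layers are created in __init__ and set as attributes of the instance.
        self.dense1 = tf.keras.layers.Dense(64, activation="relu")
        self.dense2 = tf.keras.layers.Dense(num_classes, activation="softmax")

    def call(self, inputs):
        # The forward pass is defined imperatively in call().
        x = self.dense1(inputs)
        return self.dense2(x)

strategy = tf.distribute.MirroredStrategy()   # distributes training across available GPUs
with strategy.scope():
    model = MyModel()
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# model.fit(dataset, epochs=...) would then train as usual under the strategy.
```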
General reaction to the release of the TensorFlow 2.0 beta has been positive.
https://twitter.com/markcartertm/status/1137238238748266496
https://twitter.com/tonypeng_Synced/status/1137128559414087680

A user on Reddit comments, "Can't wait to try that out!"

However, some users have compared it to PyTorch, calling PyTorch more comprehensive than TensorFlow and a more powerful platform for research that is also good for production. A user on Hacker News comments, "Maybe I'll give TF another try, but right now I'm really liking PyTorch. With TensorFlow I always felt like my models were buried deep in the machine and it was very hard to inspect and change them, and if I wanted to do something non-standard it was difficult even with Keras. With PyTorch though, I connect things however how I want, write whatever training logic I want, and I feel like my model is right in my hands. It's great for research and proofs-of-concept. Maybe for production too."

Another user says, "Might give it another try, but my latest incursion in the Tensorflow universe did not end pleasantly. I ended up recording everything in Pytorch, took me less than a day to do the stuff that took me more than a week in TF. One problem is that there are too many ways to do the same thing in TF and it's hard to transition from one to the other."

The TensorFlow team hopes to resolve the remaining issues before the 2.0 release candidate (RC), including complete Keras model support on Cloud TPUs and TPU pods, and to improve the overall performance of 2.0. The RC release is expected sometime this summer.

- Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more
- Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet
- ML.NET 1.0 RC releases with support for TensorFlow models and much more!

MongoDB acquires mLab to transform the global cloud database market and scale MongoDB Atlas

Natasha Mathur
10 Oct 2018
2 min read
MongoDB, Inc., the company behind the leading free and open source general-purpose database platform, announced yesterday that it is acquiring mLab, a San Francisco-based cloud database service. With this acquisition, MongoDB aims to deepen its relationships with developer-centric startup communities, relationships the mLab team has been very successful at maintaining in the past.

"Over the years, mLab and MongoDB have explored ways to work more closely together. As we have gotten to know each other, we have found we share a similar vision and both believe in an engineering culture based on intellectual honesty, hard work, and respect. We were excited by the idea of working together, as part of one team," said Will Shulman, CEO, mLab.

The acquisition is expected to close in the fourth quarter of MongoDB's fiscal year, which ends on January 31, 2019, subject to the satisfaction of customary closing conditions. Currently, mLab has one million databases hosted on its platform across both free and paid tiers.

Shulman expects that, given the growing trend towards building software systems and deploying them in the cloud, there will be enormous market opportunities for global cloud databases, opportunities that MongoDB can power in ways that other database technologies cannot. Another goal MongoDB plans to achieve with this acquisition is the ability to scale MongoDB Atlas at an even faster pace.

"mLab has been providing a compelling service to their customers for seven years and we are delighted to bring this talented team into the MongoDB family," stated Dev Ittycheria, President and CEO, MongoDB.

MongoDB Atlas is a leading general-purpose database that operates as an independent, global cloud service. Atlas handles the complexities involved in deploying, managing, and scaling MongoDB on your preferred cloud provider, such as Amazon Web Services or Microsoft Azure. It also comes with built-in security practices and automation of time-consuming administration tasks.

"We are very excited to come together with MongoDB to modernize the way companies store and manage their most valuable asset -- their data," said Shulman.

For more information, check out the official MongoDB blog.

- MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]
- MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more
- MongoDB going relational with 4.0 release

Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files

Sugandha Lahoti
04 Jul 2019
3 min read
Facebook and its sibling platforms Instagram and WhatsApp suffered a major outage for most of yesterday related to image display. The issues started around 3:04 pm PT on Wednesday. Users were unable to send and receive images, videos, and other files over these social media platforms. This marks the third major outage of Facebook and its family of apps this year.

Source: Down Detector

Instagram users reported that their feed might load, but they were unable to post anything new to it; doing so brought up an error message indicating that "Photo Can't Be Posted", according to users experiencing the problems. On WhatsApp, texts were going through, but for videos and images users saw a message reading "download failed" and the content did not arrive.
https://twitter.com/Navid_kh/status/1146419297385713665

Issues were particularly focused on the east coast of the US, according to the tracking website Down Detector, but they were reported across the world, with significant numbers of reports from Europe, South America, and East Asia. More than 14,000 users reported issues with Instagram, while more than 7,500 and 1,600 users complained about Facebook and WhatsApp respectively, Down Detector noted.

What was the issue?

According to Ars Technica, the issue was caused by bad timestamp data being fed to the company's CDN in some image tags. All broken images had different timestamp arguments embedded in the same URLs, and loading an image from fbcdn.net with bad "oh=" and "oe=" arguments, or no arguments at all, results in an HTTP 403 "Bad URL timestamp".

Interestingly, because of this image outage, people were able to see how Facebook's AI automatically tags photos behind the scenes. The outage stopped social media images from loading and left in their place descriptions like "image may contain: table, plant, flower, and outdoor" and "image may contain: tree, plant, sky."
https://twitter.com/zackwhittaker/status/1146456836998144000
https://twitter.com/jfruh/status/1146460397009924101

According to Reuters, which talked to Facebook representatives, "During one of our routine maintenance operations, we triggered an issue that is making it difficult for some people to upload or send photos and videos," Facebook said. Around 6 pm PT, services were restored, with Facebook and Instagram both tweeting that the problems were resolved. WhatsApp's Twitter account did not acknowledge the outage or its resolution.
https://twitter.com/instagram/status/1146565551520534528
https://twitter.com/facebook/status/1146571015872552961

Twitter also suffered an unexplained downtime in its direct messaging service.
https://twitter.com/TwitterSupport/status/1146447958952439809

The latest string of outages follows a recurring trend of issues hitting social media over the past six months. It started in March, when the Facebook family of apps was hit with a 14-hour outage, the longest in its history. Then in June, Google Cloud went offline, taking with it YouTube, Snapchat, Gmail, and a number of other web services. This month, Verizon caused a major internet outage affecting Amazon, Facebook, and Cloudflare, among others. In the same week, Cloudflare suffered its second major internet outage.

- Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule.
- Why did Slack suffer an outage on Friday?
- Facebook tweet explains 'server config change' for 14-hour outage on all its platforms

JupyterLab v0.32.0 releases

Pravin Dhandre
19 Apr 2018
2 min read
JupyterLab has announced another beta release, v0.32.0, with numerous enhancements, bug fixes, and breaking changes. This announcement follows closely on the heels of the initial JupyterLab beta release announcement made just two months ago. JupyterLab is steadily approaching its 1.0 release, with components and features such as the notebook, terminal, text editor, a powerful UI, and various third-party extensions. With the Jupyter team putting its entire focus on this project, the full and final v1.0 release is expected by June or July of this year. Let's have a quick look at what's new in this release.

Major features and improvements

New feature additions
- Better handling of corrupted and invalid state databases.
- A new option to save documents automatically.
- More commands in the notebook context menu, such as scrolling and kernel restart.
- Proactive checking for completion metadata from kernels.
- A new, separate "Shutdown all" button in the Running panel for Terminals/Notebooks.
- Support to rotate, flip, and invert images in the image viewer.
- Support to display the kernel banner in the console during a kernel restart.

Improvements
- Performance improvements wherein non-focused documents poll the server less.
- Performance improvements for rendering text streams, especially around progress bars.
- Major performance improvements for viewing large CSV files.
- The context menu is always visible in the file browser, even for an empty directory.
- Asynchronous comm messages in the services library are handled more correctly.

Bug Fixes and Miscellaneous changes
- Fixed the file dirty status indicator.
- Changed the keyboard shortcut for single-document mode.
- "Restart Kernel" cancellation now functions correctly.
- Fixed the UI with better error handling.

You can download the source code to access all the exciting features of JupyterLab v0.32.0.

Apache Flink founders data Artisans could transform stream processing with patent-pending tool

Richard Gall
04 Sep 2018
2 min read
data Artisans, the stream processing team behind Apache Flink, today unveiled data Artisans Streaming Ledger at the Flink Forward conference in Berlin. Streaming Ledger, according to data Artisans, "extends the scope of stream processing with fast, serializable ACID transactions directly on streaming data."

This is significant because previously, performing serializable transactions across streaming data without losing data consistency was impossible. If data Artisans are right about Streaming Ledger, that's not only good news for them; it's good news for developers and system architects struggling to manage streaming data within their applications.

Read next: Say hello to streaming analytics

How Streaming Ledger fits into a data streaming architecture

Streaming Ledger is essentially a new component within data Artisans' existing data streaming architecture, which includes Apache Flink.

The architecture of data Artisans Platform (via data-artisans.com)

Stephan Ewen, co-founder and CTO at data Artisans, said that "guaranteeing serializable ACID transactions is the crown discipline of data management." He also claimed that Streaming Ledger does "something that even some large established databases fail to provide. We are very proud to have come up with a way to solve this problem for real time data streams, and make it fast and easy to use."

Read next: Apache Flink version 1.6.0 released!

How Streaming Ledger works

It's not easy for streaming technologies to process event streams across shared states and tables. That's why streaming is so tough (okay, just about impossible) when used with relational databases. Streaming Ledger works by isolating tables from concurrent changes as they are modified in transactions. This helps ensure consistency is maintained across your data, as you might expect in a really robust relational database (a conceptual sketch of this property follows at the end of this piece).

data Artisans Streaming Ledger functionality (via data-artisans.com)

data Artisans have also produced a white paper that details how Streaming Ledger works, as well as further information about why you would want to use it. You need to provide details to gain access, but you can find it here.
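
The announcement itself does not include code, but the property it describes can be sketched conceptually: a transaction reads and updates several keyed tables atomically, so concurrent processing never observes a half-applied transfer. The sketch below is plain Python, with a lock standing in for Streaming Ledger's serializable isolation; it is not the data Artisans API and says nothing about how Streaming Ledger is actually implemented.

```python
# Conceptual sketch only -- not the data Artisans Streaming Ledger API.
# A lock stands in for serializable isolation across shared tables.
import threading

class TransactionalTables:
    def __init__(self):
        self.tables = {"accounts": {}, "ledger": {}}
        self._lock = threading.Lock()

    def transact(self, fn):
        # Apply fn to the tables all-or-nothing, isolated from concurrent events.
        with self._lock:
            fn(self.tables)

def transfer(src, dst, amount, txn_id):
    def body(tables):
        accounts = tables["accounts"]
        if accounts.get(src, 0) >= amount:           # transaction-local consistency check
            accounts[src] = accounts.get(src, 0) - amount
            accounts[dst] = accounts.get(dst, 0) + amount
            tables["ledger"][txn_id] = (src, dst, amount)
    return body

state = TransactionalTables()
state.tables["accounts"]["alice"] = 100
for event in [("alice", "bob", 30, "t1"), ("alice", "carol", 90, "t2")]:
    state.transact(transfer(*event))
print(state.tables["accounts"])   # {'alice': 70, 'bob': 30} -- t2 is rejected, never half-applied
```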

Say hello to FASTER: a new key-value store for large state management by Microsoft

Natasha Mathur
20 Aug 2018
3 min read
The Microsoft research team announced a new key-value store named FASTER at SIGMOD 2018 in June. FASTER offers support for fast and frequent lookups of data, and it helps with updating large volumes of state information, which poses a problem for cloud applications today.

Consider IoT as a scenario, where billions of devices report and update state such as per-device performance counters. Today's applications end up underutilizing resources such as storage and networking on the machine. FASTER helps solve this problem by exploiting the temporal locality in these applications to control the in-memory footprint of the system.

According to Microsoft, "FASTER is a single-node shared memory key-value store library". A key-value store is a NoSQL database that uses a simple key/value method for data storage. FASTER consists of two important innovations:
- A cache-friendly, concurrent, and latch-free hash index, which maintains logical pointers to records in a log. The FASTER hash index is an array of cache-line-sized hash buckets, each with 8-byte entries that hold hash tags and logical pointers to records stored separately.
- A new concurrent, hybrid log record allocator. This backs the index and spans fast storage (such as cloud storage and SSD) and main memory.

What makes FASTER different?

Traditional key-value stores use log-structured record organizations. FASTER is different: its hybrid log combines log-structuring with read-copy-updates (good for external storage) and in-place updates (good for in-memory performance). The hybrid log head, which lies in storage, uses read-copy-update, whereas the hybrid log tail, which lies in main memory, uses in-place updates. A read-only region in memory lies between these two regions and gives records another chance to be copied back to the tail. This captures the temporal locality of updates and allows a natural clustering of hot records in memory (a conceptual sketch of the hybrid log follows at the end of this article).

As a result, FASTER is capable of outperforming even pure in-memory data structures like the Intel TBB hash map. It also performs far better than today's popular key-value stores and caching systems, such as RocksDB and Redis, says Microsoft.

Beyond that, FASTER also provides support for failure recovery: its recovery strategy brings the system back to a recent consistent state at low cost. This differs from the recovery mechanism in traditional database systems, as it does not involve blocking or creating a separate "write-ahead log".

For more information, check out the official research paper.

- Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project
- Microsoft Azure's new governance DApp: An enterprise blockchain without mining
- Microsoft announces the general availability of Azure SQL Data Sync
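
As a way to picture the hybrid log described above, here is a conceptual Python sketch; it is not the FASTER API or its data layout, only an illustration of the idea: records in the mutable in-memory tail are updated in place, older records are updated read-copy-update style by appending a fresh copy to the tail, and the hash index always points at the latest logical address of each key.

```python
# Conceptual sketch only -- not the FASTER API. Hot records in the mutable tail are
# updated in place; colder records get a new copy appended to the tail
# (read-copy-update), and the hash index tracks each key's latest logical address.
class HybridLogStore:
    def __init__(self, mutable_region_size=4):
        self.log = []                 # append-only log of (key, value) records
        self.index = {}               # hash index: key -> logical address in the log
        self.mutable_size = mutable_region_size

    def _in_mutable_region(self, address):
        # The last few records form the in-memory mutable tail.
        return address >= len(self.log) - self.mutable_size

    def upsert(self, key, value):
        address = self.index.get(key)
        if address is not None and self._in_mutable_region(address):
            self.log[address] = (key, value)          # in-place update (hot record)
        else:
            self.index[key] = len(self.log)           # read-copy-update (cold or new record)
            self.log.append((key, value))

    def read(self, key):
        address = self.index.get(key)
        return None if address is None else self.log[address][1]

store = HybridLogStore()
store.upsert("device-42", {"reads": 1})
store.upsert("device-42", {"reads": 2})   # still hot, so updated in place
print(store.read("device-42"))            # {'reads': 2}
```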

In 5 years, machines will do half of our job tasks of today; 1 in 2 employees need reskilling/upskilling now - World Economic Forum survey

Bhagyashree R
20 Sep 2018
5 min read
Earlier this week, the World Economic Forum published The Future of Jobs Report 2018, based on a survey conducted to analyze trends in the job sector over the period 2018-2022. The survey covered 20 economies and 12 industry sectors. Its main focus was to better understand the potential of new technologies, including automation and algorithms, to create new high-quality jobs and to improve the existing job quality and productivity of human employees.

Key findings from The Future of Jobs survey

#1 Technological advances will drive business growth

By 2022, four key technologies will enable business growth: high-speed mobile internet, artificial intelligence, widespread adoption of big data analytics, and cloud technology. 85% of companies are expected to invest in big data analytics. A large share of companies are also interested in adopting the internet of things, app- and web-enabled markets, and cloud computing. Machine learning and augmented and virtual reality will also see considerable business investment.

Source: World Economic Forum

#2 Acceptance of robots varies across sectors

The demand for humanoid robots will be limited in this period, as businesses gravitate towards robotics technologies that are at or near commercialization. These technologies include stationary robots, non-humanoid land robots, and fully automated aerial drones, in addition to machine learning algorithms and artificial intelligence. The majority of companies (37% to 29%) are showing interest in adopting stationary robots. The Oil & Gas industry reports the same level of demand for stationary, aerial, and underwater robots, while the Financial Services industry is planning the adoption of humanoid robots in the period up to 2022.

#3 Towards equal work distribution between machines and humans

Almost 50% of companies expect that by 2022 automation will lead to some reduction in their workforce, while 38% are more likely to shift their workforce to new productivity-enhancing roles, and more than a quarter believe automation will lead to the creation of new roles in their enterprise. The period 2018-2022 will see a significant shift in the division of work between humans, machines, and algorithms. Currently, across the 12 industries surveyed, 71% of task hours are performed by humans, compared to 29% by machines. By 2022, this average is expected to shift to 58% of task hours performed by humans and 42% by machines.

Source: World Economic Forum

Read also: 15 millions jobs in Britain at stake with Artificial Intelligence robots set to replace humans at workforce

#4 Emergence of new job opportunities

By 2020, with these technological advancements, newly emerging job roles and opportunities are expected to grow from 16% to 27% of the employee base, while job roles affected by technological obsolescence are set to decrease from 31% to 21%. The survey also revealed that there will be a decline of 0.98 million jobs and a gain of 1.74 million jobs. The professions that will enjoy increasing demand include Data Analysts and Scientists, Software and Applications Developers, and Ecommerce and Social Media Specialists; as you can tell, these are roles that are significantly based on, and enhanced by, the use of technology.

Read also: Highest Paying Data Science Jobs in 2017

Roles that leverage 'human' skills are also expected to grow, such as Customer Service Workers, Sales and Marketing Professionals, Training and Development, People and Culture, and Organizational Development Specialists, as well as Innovation Managers.

Source: World Economic Forum

#5 Upskilling and reskilling is the need of the hour

With so many businesses embracing technological advancements for growth, around 54% of employees will require significant reskilling and upskilling. Of these, 35% are expected to require additional training of up to six months, 9% will require reskilling lasting six to 12 months, and 10% will require additional skills training of more than a year. The key skills expected to grow by 2022 include analytical thinking and innovation as well as active learning and learning strategies. Alongside these, there is increasing demand for technology design and programming, indicating a growing demand for various forms of technology competency identified by the employers surveyed for this report.

Read also: A non programmer's guide to learning Machine learning

Employers are also looking for 'human' skills in their employees, including creativity, originality and initiative, critical thinking, and persuasion and negotiation. Social influence, emotional intelligence, and leadership will also see an outsized increase in demand.

Read also: 96% of developers believe developing soft skills is important

Source: World Economic Forum

#6 How companies are planning to address skills gaps

To address the skills gaps widened by the adoption of new technologies, companies highlighted three future strategies: hire wholly new permanent staff who already possess skills relevant to new technologies, seek to automate the work tasks concerned completely, or retrain existing employees.

Read also: Stack skills, not degrees: Industry-leading companies, Google, IBM, Apple no longer require degrees

Most companies are considering hiring new permanent staff with relevant skills. A quarter of them are undecided about retraining existing employees, and two-thirds expect their employees to acquire the necessary skills during the transition. Between one-half and two-thirds are likely to turn to external contractors, temporary staff, and freelancers to address their skills gaps.

Source: World Economic Forum

Read also: Why learn machine learning as a non-techie?

The advancements in technology will come with their own pros and cons. Automation and work augmentation will reduce demand for some current job roles, but will also open up an entirely new range of livelihood options for workers. To be prepared for this shift, with the help of our employers, we need to upskill ourselves with an agile mindset. To know more in detail, check out the World Economic Forum report, The Future of Jobs 2018.

- Survey reveals how artificial intelligence is impacting developers across the tech landscape
- Why TensorFlow always tops machine learning and artificial intelligence tool surveys
- What the IEEE 2018 programming languages survey reveals to us

TiDB open sources its MySQL/MariaDB compatible data migration (DM) tool

Natasha Mathur
22 Jan 2019
2 min read
TiDB, an open source cloud-native distributed database, made its Data Migration (DM) platform available as open source today. Data Migration (DM) is an integrated data synchronization task management platform that supports both full data migration and incremental data migration from MySQL/MariaDB into TiDB. It helps reduce operations costs and makes troubleshooting easier.

The Data Migration tool comes with three major components: DM-master, DM-worker, and dmctl.

DM-master handles and schedules the operation of all data synchronization tasks. It stores the topology information of the DM cluster and keeps track of the running state of DM-worker processes and data synchronization tasks.

DM-worker, on the other hand, executes specific data synchronization tasks. It manages the configuration of the data synchronization subtasks and monitors their running state.

The third component, dmctl, is a command-line tool used to control the DM cluster. It creates, updates, and drops data synchronization tasks, checks their running state, handles any errors that occur while they run, and verifies the correctness of their configuration.

DM is licensed under the Apache License, Version 2.0, allowing users to freely use and modify the platform. This also allows users to contribute new features or bug fixes to make the platform better for everyone.

For more information, check out the official DM documentation.

- FoundationDB open-sources FoundationDB Record Layer with schema management, indexing facilities and more
- Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)
- Facebook open sources Spectrum 1.0.0, an image processing library for better mobile image production

SingularityNET and Mindfire unite talents to explore artificial intelligence

Prasad Ramesh
09 Aug 2018
3 min read
SingularityNET is collaborating with Mindfire to pool their best talent and work on something similar to Mindfire's Mission-1. That mission was dedicated to "cracking the brain code" and to understanding more about how the human brain works. The two organizations hope to combine their talents to work on AI services and education, and to work towards combining their blockchain tokens.

SingularityNET is an AI solution platform powered by a decentralized protocol that lets anyone create, share, and monetize AI services. Mindfire is focused on understanding the building blocks of AI that form human-level intelligence. Mindfire believes, "The partnership between SingularityNET and the Mindfire Foundation will grow the talent pool of both entities and increase the productivity not only of AI services but also of the number of relevant insights in human-level artificial intelligence."

Together they plan to target three key areas:
- Talent: choose the best talent, leading minds from the pool across different AI disciplines.
- AI services: build a decentralized hub for AI services where Mindfire's talent can work on and use SingularityNET's platform.
- AI education: implement practical courses and lectures on AI and its business applications and opportunities.

Ben Goertzel, founder of SingularityNET, says, "The SingularityNET decentralized AI platform is open to any possible approach to AI or any complex systems". Even though his approach is less "brain-focused" than Mindfire's, he believes there is great potential for collaboration.

Mindfire, with SingularityNET, has launched a call for applications for its next missions, Mission-2 and Mission-3. These missions focus on prototype development, including drones, robots, and other carrier systems for AI. Mission-2 is planned for November 11-16, 2018, and there are 10 planned missions to come. Head over to their website to learn more and apply!

Mindfire also announced a completely revised white paper, set for release on August 15, 2018. The revised white paper will detail the key functionality of the ERC20-based token, MFT, elaborate on Mindfire's business model, and outline the Mindfire token sale, including a reward campaign. SingularityNET tweeted on July 31:
https://twitter.com/mindfire_global/status/1024275219391959040

Mindfire's talent can collaborate to create applications or products on the SingularityNET platform and leverage it. There is also potential for SingularityNET to connect Mindfire with people building applications or doing research, and to direct various AI problem areas at Mindfire's talent. Ben believes they may be able to build mechanisms where SingularityNET's AGI token converts automatically to Mindfire's MFT token. The focus here is on the exchangeability of these tokens to promote the development of SingularityNET's decentralized protocol. This is a future area of collaboration that could lead to two-way incentivization benefiting both companies.

You can watch Ben, CEO and Chief Scientist of SingularityNET, express his vision for the collaboration in this YouTube video. To know more, visit the official Mindfire blog.

- Attention designers, Artificial Intelligence can now create realistic virtual textures
- Top languages for Artificial Intelligence development
- 7 Popular Applications of Artificial Intelligence in Healthcare

Google employees plan a walkout to protest against the company’s response to recent reports of sexual misconduct

Natasha Mathur
30 Oct 2018
5 min read
It was only last week that a report by The New York Times brought to light shocking allegations of sexual misconduct at Google by Andy Rubin, the creator of Android. Now, more than 200 engineers at Google are organizing a "women's walk" walkout this week to protest the company's response to the reports of sexual misconduct, according to BuzzFeed News.

According to the New York Times report, after Rubin was accused of misbehavior in 2014 and the allegations were confirmed by Google, he was asked to leave by then Google CEO Larry Page, but received a $90 million exit package and a high-profile, well-respected farewell from Google in October 2014. Rubin isn't the only one: at least four senior executives have been protected by Google in the past despite being accused of sexual misconduct, as per the NY Times report.

Andy Rubin denied the NY Times report on Twitter:
https://twitter.com/Arubin/status/1055632398509985792
https://twitter.com/Arubin/status/1055632399172755456

As per the BuzzFeed reports, Google executives hosted an all-hands meeting last Thursday, during which they tried to explain their handling of Rubin and apologized to employees. Google CEO Sundar Pichai also sent an email to all Google employees on Thursday clarifying that the company has fired 48 people over the last two years for sexual harassment, 13 of whom were "senior managers and above". He also mentioned that none of the accused employees received any exit packages. "We are dead serious about making sure we provide a safe and inclusive workplace. We want to assure you that we review every single complaint about sexual harassment or inappropriate conduct, we investigate and we take action," read the email.

The protest is a response to Google's handling of sexual misconduct within the workplace in the recent past, which employees found inadequate. Moreover, Google employees participating in the planned protest are dissatisfied that senior executives such as David Drummond (Chief Legal Officer at Alphabet and Chairman of CapitalG), mentioned in the NY Times report for engaging in "inappropriate relationships" within the organization, continue to work in highly placed positions at Google and have not faced any real punitive action.

In April this year, Google employees protested against Project Maven with petitions and demands for more transparency within the organization. The demands of the upcoming protest haven't been specified yet, but it shares similar sentiments.

The planning for the walkout was done on an internal online forum, where Google employees mapped out the details of the protest. By yesterday, that post had gathered upvotes, according to a current Google employee who wishes to remain anonymous. The day and time of the walkout haven't been fixed yet, but it is likely to take place this Thursday, as reported by BuzzFeed.

One Google employee, who wishes to remain anonymous, told BuzzFeed, "Personally, I'm furious. I feel like there's a pattern of powerful men getting away with awful behavior towards women at Google, or if they don't get away with it, they get a slap on the wrist, or they get sent away with a golden parachute, like Andy Rubin."

Public reaction to the protest has been largely positive:
https://twitter.com/iamjono/status/1057120231074611200
https://twitter.com/womensmarch/status/1057067265324183552
https://twitter.com/ryancarson/status/1057003377777930240

Our take on this development

If this protest manages to get a response from Google on the veracity of the claims made by the NYT article, it would be a good place to start healing. Openly acknowledging issues is the first step towards working on them. The protest could be more effective had the organizers set a clear list of goals to achieve from the walkout; currently, it appears more like an emotional response to the revelations than a way to move the company in the right direction on making the workplace safe and treating everyone fairly.

Of late, Google employees seem increasingly to take on the role of the company's moral compass on contentious and sensitive topics. Holding Google accountable for its role in enabling workplace misconduct is a worthy cause to stand up for. However, doing this via continuous protests or through media leaks does not seem an effective long-term approach to dealing with organizational issues, for either employees or Google. There is a risk of employees becoming jaded and distrusting, or of management simply taking the easy way out by leaving behind those who don't align with its new vision, only to become a monolithic thinking machine. Frequent employee protests are a symptom of a deeper value-misalignment problem that Google must reflect on.

- Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan
- OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
- Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices