
Tech News

3711 Articles
Slack confidentially files to go public

Amrata Joshi | 05 Feb 2019 | 3 min read
Yesterday, Slack Technologies confidentially filed with the U.S. Securities and Exchange Commission to list its shares publicly. The company plans a direct listing on the stock market and could race Lyft, Uber, and Airbnb to become the next major company after Spotify to use this non-traditional alternative to an Initial Public Offering (IPO). Last year, Spotify decided to sell its shares directly to ordinary investors rather than to a pre-chosen group of its bankers' clients, the move known as a direct listing. A company that opts for a direct listing doesn't create or sell any new stock, and therefore doesn't raise any money; instead, current shareholders sell their preexisting shares.

Slack had about $900 million in cash on its balance sheet as of October 2018, according to The Information. Last December, Slack hired Goldman Sachs to lead its IPO as an underwriter and was seeking a valuation of more than $10 billion, as reported by Reuters. According to a report by Crunchbase, Slack has raised about $1 billion so far.

Global growth concerns and U.S.-China trade issues have weighed on the equity markets. Many companies have pulled IPOs, citing "unfavorable economic conditions", with the number rising since the U.S. government shutdown. It will be interesting to see what step the company takes next.

Some users believe this move will benefit Slack. One comment on Hacker News reads, "Nothing is wrong with the market: Slack may have decided that this is the best way for them to create liquidity. There is also a cap (2000) on the number of shareholders a company can have before they have to abide by what amounts to the same reporting requirements as a publicly traded company. Slack also get the advantage of the usual market pop of acquiring companies share prices that usually amounts to a significant % of the cash value of the transaction." Others argue that the company will gain huge leverage from stock compensation and, with access to public funding, will be better placed to buy other companies.

To know more about this news, check out the official press release.

Slack has terminated the accounts of some Iranian users, citing U.S. sanctions as the reason
Airtable, a Slack-like coding platform for non-techies, raises $100 million in funding
Atlassian sells Hipchat IP to Slack

Mozilla updates Firefox Focus for mobile with new features, revamped design, and GeckoView for Android

Natasha Mathur | 03 Oct 2018 | 3 min read
Mozilla yesterday announced a major update to Firefox Focus, its free and open-source privacy-focused browser for Android and iOS smartphones and tablets. Firefox Focus lets you stay "focused" by automatically blocking ads and trackers. The update brings new features, a revamped design, and GeckoView, the new technology behind Firefox Focus for Android.

New features

Three new features have been added to Firefox Focus: home screen tips, search suggestions, and Siri shortcuts for iOS users.

Home screen tips

Good news for people who like to keep tabs on feature releases: the core functionality of Firefox Focus is now surfaced on the start screen, giving an overview of the whole range of possibilities your privacy browser has to offer. This addition doesn't interrupt usage at all and refreshes automatically after every click on the Erase button. Just open the browser and a set of helpful feature recommendations is presented on the start screen in your preferred language (on Android). For iOS users, this feature is currently available only in English.

Search suggestions

Search suggestions make the whole searching process more convenient. You can activate the feature by opening the app and selecting Settings > Search > "Get search suggestions". Mindful of users' privacy, Firefox Focus ships with search suggestions turned off by default, so what you type in the address bar is not shared with your search provider unless you choose to turn the feature on.

Siri shortcuts for iOS users

Siri shortcuts have been added to the updated Firefox Focus. You can now set and open a favorite website, erase and open Firefox, and erase in the background using Siri shortcuts. Mozilla's aim here is to improve the Firefox Focus experience for iOS users.

Revamped design

The visual design of the Firefox Focus browser has been fully optimized for the recently released Android Pie. New icons, a customized URL bar, and a simplified settings menu make Firefox Focus easier to use and give it a consistent user experience.

New engine for Firefox Focus on Android

Focus has so far used Android's built-in WebView, which has limitations: WebView is not designed for building browsers. To add next-generation privacy features to Focus, its developers need deep access to the browser internals, which is why they have opted for Mozilla's own engine, Gecko. Firefox Focus is now based on GeckoView, Mozilla's own mobile engine, built for use in Android apps. GeckoView lets the team leverage all of their Firefox expertise to build more compelling, safe, and robust online experiences, and will enable unique privacy-enhancing features in the future, such as reducing the potential for third-party data collection.

For more information, check out the official Mozilla website.

Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more
Firefox Nightly browser: Debugging your app is now fun with Mozilla's new 'time travel' feature
Firefox Nightly's Secure DNS Experimental Results out

A new Model Optimization Toolkit for TensorFlow can make models 3x faster

Prasad Ramesh | 19 Sep 2018 | 3 min read
Yesterday, TensorFlow introduced a new Model Optimization Toolkit, a suite of techniques that both new and experienced developers can leverage to optimize machine learning models. The techniques are suitable for any TensorFlow model and will be particularly useful to developers running TensorFlow Lite.

What is model optimization in TensorFlow?

Support has been added for post-training quantization in the TensorFlow Lite conversion tool. In theory, this can result in up to four times more compression of model data and up to three times faster execution for relevant machine learning models. Developers who quantize their models also gain the additional benefit of lower power consumption.

Enabling post-training quantization

The quantization technique is integrated into the TensorFlow Lite conversion tool, and getting started is easy. After building a TensorFlow model, simply enable the post_training_quantize flag in the conversion tool. If the model is saved in saved_model_dir, the quantized tflite flatbuffer can be generated like this:

```
converter = tf.contrib.lite.TocoConverter.from_saved_model(saved_model_dir)
converter.post_training_quantize = True
tflite_quantized_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_quantized_model)
```

There is an illustrative tutorial that explains how to do this. To make the technique usable for deployment on platforms not currently supported by TensorFlow Lite, there are plans to incorporate it into general TensorFlow tooling as well.

Post-training quantization benefits

The benefits of this quantization technique include:
- Approximately 4x reduction in model size.
- 10-50% faster execution for models consisting primarily of convolutional layers.
- Up to 3x speedup for RNN-based models.
- Lower power consumption for most models, due to reduced memory and computation requirements.

The following graph shows model size reduction and execution time speedups for a few models, measured on a Google Pixel 2 phone using a single core; the optimized models are almost four times smaller. (Chart source: TensorFlow Blog)

The speedup and model size reductions have little impact on accuracy, although models that are already small to begin with may experience more significant losses. (Accuracy comparison chart source: TensorFlow Blog)

How does it work?

Behind the scenes, the optimization works by reducing the precision of the parameters (the neural network weights) from their training-time 32-bit floating-point representations to much smaller and more efficient 8-bit integer representations. The resulting model pairs these less precise operation definitions with kernel implementations that use a mix of fixed- and floating-point math. The heaviest computations execute quickly at lower precision, while the most sensitive ones are still computed with high precision, so accuracy losses stay small. (A sketch of the underlying idea follows after this article's links.)

To know more about model optimization, visit the TensorFlow website.

What can we expect from TensorFlow 2.0?
Understanding the TensorFlow data model [Tutorial]
AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD's deep learning plans
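To make the float-to-int8 mapping described above concrete, here is a minimal NumPy sketch of affine quantization, the general idea post-training quantization builds on. It is illustrative only: TensorFlow Lite's actual scheme differs in details (per-axis scales, symmetric ranges for some ops), and the helper functions here are invented for the example.

```python
import numpy as np

def quantize_int8(weights):
    """Affine-quantize float32 weights to int8 plus a scale and zero point."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0           # spread the float range over 256 levels
    zero_point = round(-w_min / scale) - 128  # int8 code that represents 0.0
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values, as mixed-precision kernels do."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(8, 8).astype(np.float32)  # stand-in for a layer's weights
q, scale, zp = quantize_int8(weights)
err = np.abs(dequantize(q, scale, zp) - weights).max()
print(f"stored {weights.nbytes} bytes as {q.nbytes} bytes, max error {err:.5f}")
```

Running this shows the roughly 4x storage saving the post mentions (4 bytes per weight down to 1), with only a small reconstruction error.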

Uber open sources its large scale metrics platform, M3 for Prometheus

Savia Lobo | 08 Aug 2018 | 4 min read
Yesterday, Uber open-sourced M3, its robust and scalable metrics infrastructure, as a remote storage backend for Prometheus, the popular monitoring and alerting solution. Uber has long used M3 to access metrics on its backend systems; by open sourcing it, Uber wants others in the broader community to benefit from its metrics platform as well.

Prior to releasing M3, Uber released M3DB, the scalable storage backend for M3. M3DB is a distributed time series database that can store real-time metrics at long retention periods.

Along with M3, Uber also open-sourced M3 Coordinator, a bridge that users can deploy to access the benefits of M3DB and Prometheus. The M3 Coordinator performs downsampling, ad hoc retention, and aggregation of metrics using retention and rollup rules, which makes it possible to apply specific retention and aggregations to subsets of metrics on the fly. The rules are stored in etcd, which runs embedded in the binary of an M3DB seed node.

M3 for Prometheus

Although Prometheus is a popular monitoring and alerting solution, its scalability and durability are limited by single nodes. The M3 metrics platform provides a turnkey, scalable, and configurable multi-tenant store for Prometheus metrics. (Diagram source: Uber Engineering)

Before M3, Uber emitted metrics to a Graphite stack, which stored them using the Whisper file format in a sharded Carbon cluster. Uber used Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. However, expanding the Carbon cluster required a manual resharding process and, due to the lack of replication, any single node's disk failure caused permanent loss of its associated metrics. This solution could not keep up as Uber kept expanding, which led the company to build M3, a system that provides fault-tolerant metrics ingestion, storage, and querying as a managed platform. Released in 2015, M3 now houses over 6.6 billion time series.

Features of M3 include:
- It optimizes every part of the metrics pipeline, giving engineers more efficient storage and lower hardware usage.
- It compresses data heavily to reduce the hardware footprint, further optimizing Gorilla's TSZ compression for float64 values, a variant known as M3TSZ compression (see the sketch after this article's links).
- It maintains a lean memory footprint for storage so that memory doesn't become a bottleneck, since a significant portion of each data point can be "write once, read never."
- To speed up access time, a Bloom filter and index summary per shard time window block are kept in mmap'd memory. This allows ad hoc queries of up to 100,000 unique time series in a single query over long retention periods (in some cases spanning years of retention).
- It avoids compactions where possible, including in the downsampling path, increasing the utilization of host resources for more concurrent writes and providing steady write/read latency.
- Its native design for time series storage does not require vigilant operational attention to run with a high write volume.

The M3 architecture

The M3 architecture provides a single global view of all metrics. With such a global view, upstream consumers need not navigate routing, which makes metrics discovery much simpler. For workloads that fail applications over between regions, or workloads sharded across regions, the single global view makes it much easier to sum and query metrics across all regions in a single query. This lets users see all operations of a specific type globally and look at longer retention to view historical trends in a single place.

How is the single global view achieved? Metrics are written in M3 to local regional M3DB instances. In this setup, replication is local to a region and can be configured to be isolated by availability zone or rack. Queries fan out both to the local region's M3DB instances and to coordinators in remote regions where metrics are stored, returning compressed M3TSZ blocks for matched time series wherever possible. Uber engineers plan to further upgrade M3 to push query aggregations to remote regions to execute before returning results, as well as to the local M3DB storage node wherever possible.

Read more about M3 in detail in the official Uber Engineering blog post.

China's Baidu launches Duer OS Prometheus Project to accelerate conversational AI
Log monitoring tools for continuous security monitoring policy [Tutorial]
Monitoring, Logging, and Troubleshooting
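As promised above, here is a minimal Python sketch of the XOR trick at the heart of Gorilla-style TSZ compression, which M3TSZ extends for float64 metric values. Successive samples of a metric are usually close, so XOR-ing each value's bit pattern with its predecessor yields words that are mostly zero bits and encode compactly. The real codec adds variable-length encoding of leading/trailing zero runs, delta-of-delta timestamps, and more; the helper below is invented purely for illustration.

```python
import struct

def xor_deltas(values):
    """XOR each float64 sample's bits with its predecessor's bits."""
    def bits(f):
        # Reinterpret a float64 as a uint64 bit pattern.
        return struct.unpack(">Q", struct.pack(">d", f))[0]

    prev = bits(values[0])
    out = [prev]                # the first value is stored verbatim
    for v in values[1:]:
        cur = bits(v)
        out.append(prev ^ cur)  # near-identical values XOR to mostly-zero words
        prev = cur
    return out

series = [120.0, 120.0, 121.5, 121.5, 122.0]
for word in xor_deltas(series):
    print(f"{word:064b}")       # note the long runs of zeros after the first row
```

A repeated value XORs to an all-zero word, which is why slowly changing gauges compress so well under this family of codecs.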

Mozilla’s Firefox Send is now publicly available as an encrypted file sharing service

Bhagyashree R | 13 Mar 2019 | 2 min read
Yesterday, Mozilla announced that Firefox Send, which started out as a "Test Pilot" experiment, is now publicly available. Firefox Send is a free file sharing service that lets users easily and securely share files, with end-to-end encryption, from any browser. By the end of this week, a beta version of its Android app will also be available.

How does Firefox Send work?

Firefox Send is intended as an alternative to email, which doesn't support large file attachments. Users do have cloud storage options like Google Drive and Dropbox, but these can be time-consuming when you just need to share a single file for a limited amount of time. To use the service, visit the Firefox Send website, upload your file, and set an expiration period. You also have the option to password-protect your files before sending. You then get a link that you can share with a recipient. Check out the following video to see exactly how it works: https://www.youtube.com/watch?v=eRHpEn2eHJA

Firefox Send's features and advantages

Firefox Send maintains the security of your files by providing end-to-end encryption from the moment a file is sent until it is opened (a sketch of the idea appears after this article's links). You can share files of up to 1 GB; to share files of up to 2.5 GB, you need to sign up for a free Firefox account. Recipients don't need a Firefox account to access a shared file: they simply click the received link and download the file.

The service puts control in the user's hands by letting them choose when a file link expires and how many times the file can be downloaded, plus an optional password. These features come in handy when you want to give the recipient only one-time or limited access to your files, ensuring that your information is not available online indefinitely.

To know more about Firefox Send, check out the official Mozilla announcement.

Mozilla Firefox will soon support 'letterboxing', an anti-fingerprinting technique of the Tor Browser
Mozilla engineer shares the implications of rewriting browser internals in Rust
Common Voice: Mozilla's largest voice dataset with approx 1400 hours of voice clips in 18 different languages
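To illustrate what "end-to-end" means here: the file is encrypted before upload, so the service only ever stores ciphertext, and the key travels out of band (Send encodes it in the URL fragment, which browsers do not transmit to servers). Below is a rough Python sketch of that idea using the third-party cryptography package; it is an analogy, not Send's actual in-browser Web Crypto implementation.

```python
from cryptography.fernet import Fernet

# Sender side: encrypt locally, before anything touches the network.
key = Fernet.generate_key()  # stays with the sender and recipient only
ciphertext = Fernet(key).encrypt(b"quarterly report contents")

# ... ciphertext is uploaded, stored by the server, and later downloaded ...
# The server never sees the key and cannot read the file.

# Recipient side: decrypt locally with the key received out of band.
plaintext = Fernet(key).decrypt(ciphertext)
assert plaintext == b"quarterly report contents"
```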

Hadoop 3.2.0 released with support for node attributes in YARN, Hadoop Submarine and more

Amrata Joshi | 24 Jan 2019 | 3 min read
The team at Apache Hadoop has released Apache Hadoop 3.2.0, the open source software platform for distributed storage and processing of large data sets. This version is the first in the 3.2 release line and is not yet generally available or production-ready.

What's new in Hadoop 3.2.0?

Node attributes support in YARN
This release features node attributes, which let you tag nodes with multiple labels based on their attributes and then place containers based on expressions over these labels. Node attributes are not associated with any queue, so no queue resource planning or authorization is needed for them.

Hadoop Submarine on YARN
This release ships Hadoop Submarine, which enables data engineers to develop, train, and deploy deep learning models in TensorFlow on the same Hadoop YARN cluster where the data resides. It also allows jobs to access data and models in HDFS (Hadoop Distributed File System) and other storage systems, and supports user-specified Docker images and customized DNS names for roles, such as tensorboard.$user.$domain:6006.

Storage policy satisfier
The storage policy satisfier lets HDFS applications move blocks between storage types according to the storage policies set on files and directories. It is also a solution for decoupling storage capacity from compute capacity.

Enhanced S3A connector
This release enhances the S3A connector, including better resilience to throttled AWS S3 and DynamoDB I/O.

ABFS filesystem connector
The ABFS connector supports the latest Azure Data Lake Gen2 storage.

Major improvements
- The jdk1.7 profile has been removed from the hadoop-annotations module.
- Redundant logging related to tags has been removed from configuration.
- The ADLS connector has been updated to use the current SDK version (2.2.7).
- LocalizedResource size information is now included in the NM download log for localization.
- Auxiliary services can now be configured from HDFS-based JAR files.
- User environment variables can be specified individually.
- The debug messages in MetricsConfig.java have been improved.
- Capacity Scheduler performance metrics have been added.
- Support for node labels in opportunistic scheduling has been added.

Major bug fixes
- The issue with logging for split-DNS multihoming has been resolved.
- Snapshotted encryption zone information is now immutable.
- A shutdown routine has been added to HadoopExecutor to ensure clean shutdown.
- Registry entries are now deleted from ZK on ServiceClient.
- The javadoc of package-info.java has been improved.
- An NPE in AbstractSchedulerPlanFollower has been fixed.

To know more about this release, check out the release notes on Hadoop's official website.

Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?
Uber's Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop
Setting up Apache Druid in Hadoop for Data visualizations [Tutorial]

Microsoft Open Sources ML.NET, a cross-platform machine learning framework

Pravin Dhandre | 10 May 2018 | 2 min read
At its three-day Build conference in Seattle, Washington, Microsoft announced the preview release of a machine learning framework called ML.NET. Developed by its research subsidiary, Microsoft Research, the framework will help .NET developers build their own models for their web apps across the Windows, Linux, and macOS platforms. Developers can infuse custom machine learning models into applications without much prior experience in building them.

The current release, 0.1, is a debut preview compatible with any platform that supports .NET Core 2.0 or the .NET Framework, and developers can get the framework directly from GitHub. Beyond its machine learning capabilities, this debut preview of ML.NET also includes a draft of the .NET APIs planned for building prediction models and training machine learning models, along with various machine learning algorithms and core ML data structures. Although this is the first public release, Microsoft's product groups, such as Azure, Bing, and Windows, have already been using the framework.

Microsoft has also made it clear that ML.NET will soon cover more advanced machine learning scenarios such as recommendation systems and anomaly detection. Popular concepts like deep learning, and support for libraries such as TensorFlow, CNTK, and Caffe2, will be added, with support for general machine learning libraries like the Accord.NET framework planned for an upcoming release. The framework will also add support for ONNX, scaling out on Azure, a better GUI to simplify ML tasks, and integration with Visual Studio tools.

To follow the progress of this framework, visit the .NET Blog on Microsoft's official site.

Azure meets Artificial Intelligence
Microsoft's Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
Google I/O 2018 conference Day 1 Highlights: Android P, Android Things, ARCore, ML kit and Lighthouse

IEEE Spectrum: IBM Watson has a long way to go before it becomes an efficient AI doctor

Natasha Mathur | 09 Apr 2019 | 5 min read
Eliza Strickland, Senior Associate Editor at IEEE Spectrum, the magazine of the Institute of Electrical and Electronics Engineers, published an article last week on how far IBM Watson still has to go before it establishes itself as an efficient AI in the healthcare industry.

IBM Watson, a question-answering computer system capable of answering questions posed in natural language, made headlines back in February 2011 when it defeated two human champions on Jeopardy!, the popular American quiz show. That was also when IBM researchers began exploring how Watson's capabilities could be extended to 'revolutionize' health care. IBM decided to apply Watson's outstanding NLP capabilities to medicine and even promised a commercial product.

IBM first showed off Watson's potential to transform medicine using AI back in 2014. For the demo, Watson was fed a bizarre collection of patient symptoms, from which it produced a list of possible diagnoses. Watson's memory bank included information on even the rarest of diseases, and its processors were totally unbiased in approach, giving it an edge over other AIs for doctors. "If Watson could bring that instant expertise to hospitals and clinics all around the world, it seemed possible that the AI could reduce diagnosis errors, optimize treatments, and even alleviate doctor shortages—not by replacing doctors but by helping them do their jobs faster and better," writes Strickland.

However, despite promising new AI commercial products, IBM could not follow up on that promise. "In the eight years since, IBM has trumpeted many more high-profile efforts to develop AI-powered medical technology—many of which have fizzled and a few of which have failed spectacularly," writes Strickland. Moreover, the products that have come out of the IBM Watson Health division are more like basic AI assistants capable of performing routine tasks, nowhere close to being an AI doctor.

Challenges faced by Watson in the healthcare industry

When IBM was first considering Watson's possibilities in health care, the most challenging issue was that the bulk of patient data in medicine is unstructured. This includes doctors' notes and hospital discharge summaries, which account for about 80 percent of a typical patient's record and are an amalgamation of jargon, shorthand, and subjective statements.

Another challenge IBM Watson faced was the diagnosis of cancer. Mark Kris, a lung cancer specialist at Memorial Sloan Kettering Cancer Center in New York City, along with other preeminent physicians, trained an AI system known as Watson for Oncology in 2015. Watson for Oncology would learn by ingesting the vast medical literature on cancer and the health records of real cancer patients, uncovering patterns unknown to humans. Preeminent physicians at the University of Texas MD Anderson Cancer Center in Houston also collaborated with IBM to create a tool called Oncology Expert Advisor. Both products, however, faced severe criticism, including claims that Watson for Oncology at times provided 'useless' and 'dangerous' recommendations. "A deeper look at these two projects reveals a fundamental mismatch between the promise of machine learning and the reality of medical care—between 'real AI' and the requirements of a functional product for today's doctors," writes Strickland.

Although Watson learned quickly to scan articles on clinical studies, it was difficult to teach it to read the articles the way a doctor would. "The information that physicians extract from an article, that they use to change their care, may not be the major point of the study. Watson's thinking is based on statistics, so all it can do is gather statistics about main outcomes," adds Mark Kris. Researchers further found that Watson was incapable of mining information from patients' electronic health records, and that it could not compare a new patient against large numbers of other cancer patients to discover hidden patterns. They had hoped Watson would mimic the abilities of expert oncologists, and they were disappointed.

Despite these challenges, IBM Watson has had its share of success stories. Strickland cites the example of Watson for Genomics, developed in partnership with the University of North Carolina, Yale University, and other renowned institutions. The tool helps genetics labs generate reports for practicing oncologists: Watson ingests the list of a patient's genetic mutations and, in just a few seconds, generates a report describing all the relevant drugs and clinical trials. IBM's partners at the University of North Carolina published a paper on the effectiveness of Watson for Genomics in 2017.

Effective or not, IBM Watson still has a long list of hurdles to cross before IBM realizes its dream of making Watson the impeccable 'AI doctor'.

For more information, check out the official IEEE Spectrum article.

IBM CEO, Ginni Rometty, on bringing HR evolution with AI and its predictive attrition AI
IBM sued by former employees on violating age discrimination laws in workplace
IBM announces the launch of Blockchain World Wire, a global blockchain network for cross-border payments

Stanford University launches Institute of Human Centered Artificial Intelligence; receives public backlash for non-representative faculty makeup

Fatema Patrawala | 22 Mar 2019 | 5 min read
On Monday, Stanford University launched the new Institute for Human-Centered Artificial Intelligence (HAI) to augment humanity with AI. According to the announcement, the institute aims to study, guide, and develop human-centered artificial intelligence technologies and applications, and to advance the goal of a better future for humanity through AI. Its co-leaders are John Etchemendy, professor of philosophy and a former Stanford University provost, and Fei-Fei Li, a computer science professor and former Chief Scientist for Google Cloud AI and ML.

"So much of the discussion about AI is focused narrowly around engineering and algorithms... We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone," Li explains in a blog post.

The institute was launched at a symposium on campus. It will include faculty members from all seven schools at Stanford, including the School of Medicine, and will work closely with companies in a variety of sectors, including health care, and with organizations such as AI4All.

"Its biggest role will be to reach out to the global AI community, including universities, companies, governments and civil society to help forecast and address issues that arise as this technology is rolled out," said Etchemendy in the announcement. "We do not believe we have answers to the many difficult questions raised by AI, but we are committed to convening the key stakeholders in an informed, fact-based quest to find those answers."

The symposium featured a star-studded speaker lineup that included industry titans Bill Gates, Reid Hoffman, Demis Hassabis, and Jeff Dean, as well as dozens of professors in fields as diverse as philosophy and neuroscience. Even California Governor Gavin Newsom made an appearance, giving the final keynote speech. The audience included former Secretaries of State Henry Kissinger and George Shultz, former Yahoo CEO Marissa Mayer, and Instagram co-founder Mike Krieger. Any AI initiative that government, academia, and industry all jointly support is good news for the future of the tech field. HAI differs from many other AI efforts in that its goal is not to create AI rivaling humans in intelligence, but rather to find ways AI can augment human capabilities and enhance human productivity and quality of life. If you missed the event, you can view a video recording here.

The institute aims to be representative of humanity, but is criticized as exclusionary

While the institute's mission states that "the creators and designers of AI must be broadly representative of humanity," observers noticed that of the 121 faculty members listed on its website, not a single member of Stanford's new AI faculty is black.

https://twitter.com/chadloder/status/1108588849503109120

Questions were raised as to why so many of the most influential people in the Valley decided to align with this center and publicly support it, and why the center aims to raise $1 billion to further its efforts. What does this center offer such a powerful group of people?

https://twitter.com/annaeveryday/status/1108594937145114625

As soon as these comments appeared on Twitter, the institute's website was updated to include one previously unlisted faculty member, Juliana Bidadanure, an assistant professor of philosophy. Bidadanure was not listed among the institute's staff before, according to a version of the page preserved on the Internet Archive's Wayback Machine; she also spoke at the institute's opening event.

We live in an age where predictive policing is real and can disproportionately hit minority communities, and where job hiring is handled by AI that can discriminate against women. We know about Google's and Facebook's algorithms deciding what information we see and which conspiracy theory YouTube serves up next. But the algorithms making those decisions are closely guarded company secrets with global impact.

In Silicon Valley and the broader Bay Area, the conversation and the speakers have shifted. It's no longer a question of whether technology can discriminate. The questions now include who can be impacted, how we can fix it, and what we are even building anyway. When a group of mostly white engineers gets together to build these systems, the impact on marginalized groups is particularly stark. Algorithms can reinforce racism in domains like housing and policing. Recently, Facebook announced that the platform has removed ad targeting related to protected classes such as race, ethnicity, sexual orientation, and religion.

Algorithmic bias mirrors what we see in the real world. Artificial intelligence mirrors its developers and the data sets it's trained on. Where there used to be a popular mythology that algorithms were just technology's way of serving up objective knowledge, there's now a loud and increasingly global argument about just who is building the tech and what it's doing to the rest of us. The stated goal of Stanford's new human-AI institute is admirable, but to get to a group that is truly "broadly representative of humanity," they've got miles to go.

Facebook and Microsoft announce Open Rack V3 to address the power demands from artificial intelligence and networking
So, you want to learn artificial intelligence. Here's how you do it.
What can happen when artificial intelligence decides on your loan request

Per the new GDC 2019 report, nearly 50% of game developers think game industry workers should unionize

Sugandha Lahoti | 25 Jan 2019 | 2 min read
The 2019 Game Developers Conference published the results of its seventh annual State of the Industry survey two days ago. The report added two new questions this time: should the games industry unionize, and will the games industry unionize? Almost 4,000 developers participated in the survey, and nearly 50% of them believe that game industry workers should unionize. GDC 2019 is scheduled to take place March 18-22 at the Moscone Convention Center in San Francisco, California.

Asked whether they thought game industry workers should unionize, 47 percent of respondents said yes, 26 percent said maybe, 16 percent said no, and 11 percent said they didn't know. The figures fell when it came to actual implementation: when asked whether they think video game workers actually will unionize, only 21 percent said yes, 39 percent said maybe, 24 percent said they don't think it will happen, and 15 percent said they don't know. One developer responded, "There is too much supply: too many people want into the industry. Those who unionize will be shoved out of the way as companies hire those with fewer demands."

The gaming industry has been abuzz with talk of unionization for quite some time now. Game Workers Unite is a democratic organization fighting for union rights and gathering support for unionization in the gaming industry. Last December, the UK chapter of Game Workers Unite became a legal trade union.

Developers were also asked which PC/Mac game storefronts they sell their games on. The most popular answer was Steam, with roughly 47 percent saying they sell games on Valve's storefront. However, when asked whether Steam still justifies its 30 percent cut, only 6 percent said yes and 17 percent said maybe.

When developers (both indie and professional) were asked how many hours they work per week on average, 44 percent reported more than 40 hours. Almost 6 percent worked 76 to 80 hours, "suggesting that deadline-related crunch can go far beyond normal working hours," according to the survey. Other questions covered which game platforms developers are creating for, such as iOS and Android. The entire survey is available to download for free.

Electronic Arts (EA) announces Project Atlas, a futuristic cloud-based AI-powered game development platform
Packt partners with Humble Bundle to bring readers a stash of game development content
Best game engines for Artificial Intelligence game development

Mozilla disables the Adobe Flash plugin by default in Firefox Nightly 69

Bhagyashree R | 15 Jan 2019 | 2 min read
Yesterday, the Firefox team disabled the Adobe Flash plugin by default in Firefox Nightly 69. Flash will eventually be deprecated, as laid out in Mozilla's Plugin Roadmap for Firefox, though users can still activate it on certain sites through the browser settings if they want to. Flash support will be removed completely from the consumer versions of Firefox by early 2020; the Firefox Extended Support Release (ESR) will continue to support Flash until its end of life in 2020.

Why has Mozilla decided to disable Adobe Flash?

Recent years have seen huge growth in open web standards like HTML5, WebGL, and WebAssembly. These technologies now provide many of the capabilities and functionalities for which we used to need plugins, so browser vendors prefer to integrate the capabilities directly into browsers and deprecate plugins.

Hence, back in 2017, Adobe announced that, along with its technology partners Google, Mozilla, Apple, Microsoft, and Facebook, it planned to end-of-life Flash. It also said that by the end of 2020 it would stop updating and distributing Flash Player, and encouraged content creators to migrate any of their content in Flash format to new open formats.

Following this, all five partners announced their plans of action. Apple already did not support Flash on the iPhone, iPad, and iPod; for Mac users, Flash has not come pre-installed since 2010 and is off by default if users decide to install it. Facebook announced that it is supporting game developers in migrating their Flash games to HTML5. Google will disable Flash by default in Chrome and remove it completely by the end of 2020. Microsoft also announced that it will phase out Flash from Microsoft Edge and Internet Explorer, eventually removing Flash from Windows entirely by the end of 2020.

Mozilla releases Firefox 64 and Firefox 65 beta
Mozilla shares why Firefox 63 supports Web Components
Introducing Firefox Sync centered around user privacy

Google Dart 2.1 released with improved performance and usability

Prasad Ramesh | 16 Nov 2018 | 3 min read
Dart 2.1, an incremental release over Dart 2, brings changes for performance and usability. New features in Dart 2.1 include smaller code size, faster type checks, better usability for type errors, and new language features to improve productivity.

Support for int-to-double conversion in Dart 2.1

Developers new to Flutter often face obstacles in the form of analysis errors when specifying padding, setting font sizes, and so on. These errors make sense from the system's point of view: the API expects one type, say a double, and the developer supplies a value of a different type, say an int. From a usability point of view, though, this seems needless, since there is a trivial conversion from int to double. Dart 2.1 now infers such cases where applicable and silently evaluates an int as a double.

Language support for mixins in Dart 2.1

Dart 2.1 comes with a new syntax for mixins. It features a new mixin keyword that can be used to define classes intended exclusively for use as mixins. Support has also been added so that mixins can extend classes other than Object and invoke methods in their superclass; previously, mixins could only extend Object. An example of extending non-Object classes comes from Flutter's animation APIs: SingleTickerProviderStateMixin, a framework class that provides a ticker for advancing an animation by a single frame, is declared as a mixin that implements the general TickerProvider interface. Animations apply only to stateful widgets, since the position in the animation is considered state. The new mixin support in Dart 2.1 allows expressing this by declaring that only classes extending Flutter's State class can use the mixin (a rough Python analogy appears below).

Compile-time type checks

The type system in Dart 2 protects users during development, flagging violations of the contract specified by the types. Such edit-time checks, powered by the Dart Analyzer, were added in Dart 2. But there is another place where developers might expect type checks: compile time, such as when performing a Flutter release build. Those checks were incomplete in Dart 2, potentially leading to usability issues where bad source code could compile without errors. In Dart 2.1, the checks are complete: the Analyzer and the Dart compiler now contain the same checks.

Performance improvements for Flutter developers

In a few cases, though, the comprehensive checks in Dart 2 caused undesirable overheads of 20-40%. Dart 2.1 greatly reduces the cost of type checks, both for AOT-compiled code and for code run in the VM with JIT compilation. Developer tools that run on the VM benefit from this: code analysis of one large benchmark app used to take about 41 seconds and now takes only around 25 seconds.

Performance improvements for web developers

Code size and compile time are also improved for Dart code running on the web, with a focus on the output of dart2js. This yielded good results, such as a 17% reduction in minified output size and a 15% improvement in compilation time in one of the tested samples.

Other changes

There are also changes outside the core Dart SDK. Dart is now an officially supported language for protocol buffers (protobuf), a platform-neutral mechanism for serializing structured data. The team has also created a small sample for Knative, a Kubernetes-based platform for building, deploying, and managing serverless workloads.

For more details, visit the Dart blog post.
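Following up on the mixin section above: Python's multiple inheritance offers a rough analogy to what Dart 2.1's `mixin ... on ...` syntax enforces at compile time. This sketch is Python, not Dart, and the class names are invented for illustration; in Dart the compiler would reject mixing the mixin into a class that isn't a TickerProvider, whereas Python relies on convention.

```python
class TickerProvider:
    """Base capability the mixin depends on."""
    def create_ticker(self):
        return "ticker@60fps"

class SingleTickerMixin:
    """Meant only to be mixed into TickerProvider subclasses: the informal
    Python counterpart of Dart 2.1's `mixin SingleTicker on TickerProvider`,
    where the Dart compiler enforces the constraint for you."""
    def advance_frame(self):
        # Relies on the host class providing create_ticker(), just as the
        # `on` clause guarantees superclass methods exist in Dart.
        return f"advancing via {self.create_ticker()}"

class AnimationState(SingleTickerMixin, TickerProvider):
    """Only classes that also inherit TickerProvider should mix this in."""

print(AnimationState().advance_frame())  # -> advancing via ticker@60fps
```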
Google's Dart hits version 2.0 with major changes for developers
C# 8.0 to have async streams, recursive patterns and more
Golang just celebrated its ninth anniversary

Developers rejoice! GitHub announces GitHub Actions, GitHub Connect and much more to improve development workflows

Melisha Dsouza | 17 Oct 2018 | 5 min read
Yesterday, at the GitHub Universe annual developer conference held in San Francisco, the team announced a host of new changes to help developers manage and improve their development workflows. GitHub has been used by 31 million developers in the past year and is among the most trusted code hosting platforms. It has received tremendous support from developers all over the globe, and the team decided to repay that support by making life easier for them. The new upgrades include:

- GitHub Actions, which helps developers automate workflows and builds while sharing and executing code inside containers on GitHub.
- GitHub Connect, which facilitates a unified business identity, unified search, and unified contributions.
- Powerful security tools with the GitHub Security Advisory API.
- Improvements to the GitHub Learning Lab.

Let's look at these updates in depth.

#1 GitHub Actions

"A lot of the major clouds have built products for sysadmins and not really for developers, and we want to hand power and flexibility back to the developer and give them the opportunity to pick the tools they want, configure them seamlessly, and then stand on the shoulders of the giants in the community around them on the GitHub platform," GitHub head of platform Sam Lambert said in an interview with VentureBeat.

Software development demands that a project be broken down into hundreds, if not thousands, of small steps (depending on the scope of the project) to get the job done faster and more efficiently. At every stage of development, teams need to coordinate to understand the progress of each step: they work concurrently and must ensure their actions don't overlap or overwrite changes made by other members. Many companies perform these checks manually, using different development tools, which takes up a lot of time and effort.

Enter GitHub Actions. This new feature uses code packaged in a Docker container running on GitHub's servers. Users can set up triggers for events, for instance pushing new code to a project, packaging an npm module, or sending an SMS alert; the trigger then sets off Actions that take further steps defined by criteria set by administrators. Besides automating tasks, GitHub Actions lets users connect and share containers to run their software development workflow. They can easily build, package, release, update, and deploy their project in any language without having to run the code themselves. Actions is available in limited public beta for Developer, Team, and Business Cloud plans.

#2 GitHub Connect

"GitHub Connect begins to break down organizational barriers, unify the experience across deployment types, and bring the power of the world's largest open-source community to developers at work," said Jason Warner, GitHub's senior vice president of technology.

The team announced that GitHub Connect is now generally available, with new features like unified search, unified business identity, and unified contributions. Unified search covers both the open source code on GitHub.com and internal code: when searching from a GitHub Enterprise instance, users can view search results from public content on GitHub.com. The unified business identity feature allows administrators to easily manage user accounts that exist across separate Business Cloud installations; using a single back-end interface, businesses can improve billing, licensing, permissions, and policy operations. And since many developers find their contributions locked behind the firewalls of private companies, unified contributions lets developers get credit for the work they've done in the past on repositories for businesses.

#3 Better security

The new GitHub Security Advisory API automates vulnerability scans and makes it easier for developers to find threats in their code (a sketch querying this API appears after this article's links). GitHub Vulnerability Alerts now support .NET and Java, so developers who use these languages get a heads-up if any dependent code has a security exploit. GitHub will also start scanning all public repositories for known token formats, so developers who accidentally put security tokens into public code can rest easier: on finding a known token, the team alerts the token provider to validate the commit and contacts the account owner to issue a new token. From automating detection and remediation to tracking emergent security vulnerabilities, it looks like the team is going all out to improve its security functionality!

#4 The GitHub Learning Lab

GitHub Learning Lab helps developers get started with GitHub, manage merge conflicts, contribute to their first open source project, and more. The team announced three new Learning Lab courses, covering secure development workflows with GitHub, reviewing a pull request, and getting started with GitHub Apps. These courses will be made available to everyone. With Learning Lab, developers can create private courses and learning paths, customize course content, and access administrative reports and metrics.

The announcements have caused a buzz among developers on Twitter:
https://twitter.com/fatih/status/1052238735755173888
https://twitter.com/sarah_edo/status/1052247186220568577
https://twitter.com/jmsaucier/status/1052322249372590081

It will be interesting to see how these updates shape the use of GitHub in the future. To know more about the announcement, head over to GitHub's official blog.

GitHub is bringing back Game Off, its sixth annual game building competition, in November
RawGit, the project that made sharing and testing code on GitHub easy, is shutting down!
GitHub comes to your code Editor; GitHub security alerts now have machine intelligence
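As a follow-up to the Security Advisory API section above: advisories are exposed through GitHub's GraphQL v4 endpoint. The sketch below is a best-effort illustration, not official sample code; the query fields (ghsaId, summary, severity, publishedAt) reflect the v4 schema as I understand it and should be verified against the current docs, and a personal access token is assumed in the GITHUB_TOKEN environment variable.

```python
import json
import os
import urllib.request

# GraphQL query for the five most recently published security advisories.
query = """
{
  securityAdvisories(first: 5, orderBy: {field: PUBLISHED_AT, direction: DESC}) {
    nodes { ghsaId summary severity publishedAt }
  }
}
"""

request = urllib.request.Request(
    "https://api.github.com/graphql",
    data=json.dumps({"query": query}).encode(),
    headers={"Authorization": "bearer " + os.environ["GITHUB_TOKEN"]},
)
with urllib.request.urlopen(request) as response:
    advisories = json.load(response)["data"]["securityAdvisories"]["nodes"]

for adv in advisories:
    print(adv["publishedAt"], adv["severity"], adv["ghsaId"], "-", adv["summary"])
```
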
F5 Networks is acquiring NGINX, a popular web server software for $670 million

Bhagyashree R | 12 Mar 2019 | 3 min read
Yesterday, F5 Networks, the company that offers businesses cloud and security application services, announced that it is set to acquire NGINX, the company behind the popular open-source web server software, for approximately $670 million. The two companies are coming together to provide their customers with consistent application services across every environment.

F5 has seen its growth stall lately: its last quarterly earnings showed only 4% growth compared to the year before. NGINX, on the other hand, has shown 100 percent year-on-year growth since 2014. The company currently boasts 375 million users, with about 1,500 customers for its paid services like support, load balancing, and API gateway and analytics.

This acquisition will enable F5 to accelerate the time to market of its services for customers building modern applications. F5 plans to enhance NGINX's current offerings with its security solutions and to integrate its cloud-native innovations with NGINX's load balancing technology. Along with these advancements, F5 will help scale NGINX's selling opportunities using its global sales force, channel infrastructure, and partner ecosystem.

François Locoh-Donou, President and CEO of F5, shared his vision behind acquiring NGINX: "F5's acquisition of NGINX strengthens our growth trajectory by accelerating our software and multi-cloud transformation." He adds, "By bringing F5's world-class application security and rich application services portfolio for improving performance, availability, and management together with NGINX's leading software application delivery and API management solutions, unparalleled credibility and brand recognition in the DevOps community, and massive open source user base, we bridge the divide between NetOps and DevOps with consistent application services across an enterprise's multi-cloud environment."

NGINX's open source community was also a major factor behind this acquisition. F5 will continue investing in the NGINX open source project, as open source is a core part of its multi-cloud strategy. F5 expects this to help it accelerate product integrations with leading open source projects and open doors for more partnership opportunities.

Gus Robertson, CEO of NGINX, Inc., said, "NGINX and F5 share the same mission and vision. We both believe applications are at the heart of driving digital transformation. And we both believe that an end-to-end application infrastructure—one that spans from code to customer—is needed to deliver apps across a multi-cloud environment."

The acquisition has been approved by the boards of directors of both F5 and NGINX and is expected to close in the second calendar quarter of 2019. Once the acquisition is complete, NGINX founders Gus Robertson, Igor Sysoev, and Maxim Konovalov will join F5 Networks.

To know more in detail, check out the announcement by F5 Networks.

Now you can run nginx on Wasmjit on all POSIX systems
Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack

$100 million ‘Grant for the Web’ to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons

Sugandha Lahoti | 17 Sep 2019 | 3 min read
Coil, Mozilla, and Creative Commons are launching a major $100 million 'Grant for the Web' to fund people who help develop best practices for web monetization. The grant will give roughly $20 million per year for five years to content sites, open-source infrastructure developers, and independent creators that contribute to a "privacy-centric, open, and accessible web monetization ecosystem". It is a notable push to move the workings of the internet away from an ad-focused business model toward a privacy-focused one.

Grant for the Web is primarily funded by Coil, a content-monetization company, with Mozilla and Creative Commons as founding collaborators. Coil is known for developing Interledger and Web Monetization, the first comprehensive set of open standards for monetizing content on the web. Web Monetization allows users to reward creators on the web without having to rely on one particular company, currency, or payment platform.

Read also:
Mozilla announces a subscription-based service for providing ad-free content to users
Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability

Coil cited a number of issues in the internet domain, such as privacy abuses related to ads, demonetization to appease advertisers, unethical sponsored content, and large platforms abusing their market power. "All of these issues can be traced back to one simple problem," says Coil: "browsers don't pay." This forces sites to raise funds through workarounds like ads, data trafficking, sponsored content, and site-by-site subscriptions. To counter these practices, Coil will now grant money to people interested in experimenting with Web Monetization as a more user-friendly, privacy-preserving way to make money.

Award amounts will vary from small to large ($1,000-$100,000), depending on the scope of the project. The majority of the grant money (at least 50%) will go to openly licensed software and content. Special focus will be given to people who promote diversity and inclusion on the internet, and to communities and individuals that have historically been marginalized, disadvantaged, or without access. Awardees will be approved by an Advisory Council initially made up of representatives from Coil, Mozilla, and Creative Commons.

"The business models of the web are broken and toxic, and we need to identify new ways to support creators and to reward creativity," says Ryan Merkley, CEO of Creative Commons, in a statement. "Creative Commons is unlikely to invent these solutions on its own, but we can partner with good community actors who want to build things that are in line with our values."

Mark Surman, Mozilla's executive director, said, "In the current web ecosystem, big platforms and invasive, targeted advertising make the rules and the profit. Consumers lose out, too — they unwittingly relinquish reams of personal data when browsing content. That's the whole idea behind 'surveillance capitalism.' Our goal in joining Grant for the Web is to support a new vision of the future. One where creators and consumers can thrive."

Coil CEO Stefan Thomas is aware of the hurdles. "The grant is structured to run over five years because we think that's enough time to get to a tipping point where this either becomes a viable ecosystem or not," he said. "If it does happen, one of the nice things about this ecosystem is that it tends to attract more momentum."

Check out grantfortheweb.org and join the Community Forum to ask questions and learn more.

Next up in privacy:
Google open sources their differential privacy library to help protect user's private data
Microsoft contractors also listen to Skype and Cortana audio recordings, joining Amazon, Google and Apple in privacy violation scandals
How Data Privacy awareness is changing how companies do business