
Tech News - Data


Google Cloud Firestore, the serverless, NoSQL document database, is now generally available

Sugandha Lahoti
01 Feb 2019
2 min read
Google’s Cloud Firestore is a serverless NoSQL document database used for storing, syncing, and querying data for web, mobile, and IoT applications. It is integrated with both Google Cloud Platform (GCP) and Firebase, Google’s mobile development platform, and it is now generally available. Cloud Firestore is also now available in 10 new locations, bringing the total region count to 13, along with a significant price reduction for regional instances. Firestore had a single location when it launched and added two more during the beta.

Cloud Firestore is now available in 13 regions

While in beta, Cloud Firestore only allowed developers to use multi-region instances, which were sometimes more expensive than necessary and not required by every app. With this launch, Google is giving developers the option to run their databases in a single region, at prices as low as 50% of multi-region instance prices. The new Cloud Firestore pricing takes effect on March 3, 2019, for most regional instances.

Cloud Firestore’s SLA (Service Level Agreement) is also now in effect: 99.999% availability for multi-region instances and 99.99% for regional instances. With Stackdriver integration (in beta), Cloud Firestore users can monitor read, write, and delete operations in near-real time via a new "Usage" tab in the Firebase console.

For the next release, Google is working on new features, including querying for documents across collections and incrementing database values without needing a transaction. Existing Cloud Datastore users will be live-upgraded to Cloud Firestore automatically later in 2019.

Netizens are generally happy about this release. https://twitter.com/puf/status/1091030237117206529 A comment on Hacker News reads, “Been loving Firestore! It has been my first real experience w/ NoSQL in an MVP to production-ready quickly. It's been SO easy to experiment with and learn. Community has been great.”

Google Cloud releases a beta version of SparkR job types in Cloud Dataproc
4 key benefits of using Firebase for mobile app development
Build powerful progressive web apps with Firebase
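The "querying for documents across collections" feature mentioned above (a collection-group query) can be illustrated with a toy in-memory model. Everything in this sketch is invented for illustration — the dictionary layout, path strings, and helper function are not the Firestore API — it only shows what such a query conceptually does:

```python
# Toy model of Firestore's collection/document hierarchy: each key is a
# collection path, each value a list of documents. Purely illustrative.
db = {
    "cities/nyc/landmarks": [
        {"name": "Central Park", "type": "park"},
    ],
    "cities/sf/landmarks": [
        {"name": "Golden Gate Park", "type": "park"},
        {"name": "Ferry Building", "type": "market"},
    ],
}

def collection_group_query(db, group, field, value):
    """Return documents from every collection named `group`,
    regardless of its parent, where field == value."""
    results = []
    for path, docs in db.items():
        if path.split("/")[-1] == group:
            results.extend(d for d in docs if d[field] == value)
    return results

# Finds parks across *all* "landmarks" collections, whatever city owns them.
parks = collection_group_query(db, "landmarks", "type", "park")
```

With real Firestore you would use the official client library instead; the point here is only that the query spans same-named collections under different parents.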


Google expands its Blockchain search tools, adds six new cryptocurrencies in BigQuery Public Datasets

Sugandha Lahoti
07 Feb 2019
2 min read
Google’s BigQuery Public Datasets program has added six new cryptocurrencies to expand its blockchain search tools. Including Bitcoin and Ethereum, which were added last year, the total count is now eight. The six new cryptocurrency blockchain datasets are Bitcoin Cash, Dash, Dogecoin, Ethereum Classic, Litecoin, and Zcash.

A BigQuery Public Dataset is stored in BigQuery and made available to the general public through the Google Cloud Public Dataset Program. The blockchain-related datasets consist of each blockchain’s transaction history, to help developers better understand cryptocurrency.

Apart from adding new datasets, Google has released a set of queries and views that map all blockchain datasets to a double-entry book data structure, enabling multi-chain meta-analyses as well as integration with conventional financial record processing systems. A Blockchain ETL ingestion framework updates all datasets every 24 hours via a common codebase. One consequence is a higher latency for loading Bitcoin blocks into BigQuery, but the shared codebase allows additional BigQuery datasets to be ingested with less effort, and it means that a low-latency loading solution can be implemented once and reused to enable real-time streaming transactions for all blockchains.

With this release, the blockchain datasets have been standardized into a "unified schema," meaning the data is structured in a uniform, easy-to-access way. Google has also included more data, such as script op-codes; having these scripts available for Bitcoin-like datasets enables more advanced analyses. The team has also created views that abstract the blockchain ledger so it can be presented as a double-entry accounting ledger, which helps it further interoperate with Ethereum and ERC-20 token transactions.

Allen Day, Cloud Developer Advocate, Google Cloud Health AI, writes in a blog post, “We hope these new public datasets encourage you to try out BigQuery and BigQuery ML for yourself. Or, if you run your own enterprise-focused blockchain, these datasets and sample queries can guide you as you form your own blockchain analytics.”

Blockchain governance and uses beyond finance – Carnegie Mellon University podcast
Stable version of OpenZeppelin 2.0, a framework for smart blockchain contracts, released!
Is Blockchain a failing trend or can it build a better world? Harish Garg provides his insight.
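The double-entry mapping described above can be sketched in a few lines. The field names and helper functions below are hypothetical (the real BigQuery views have their own schema); the sketch only shows the core idea that each on-chain transfer yields a debit row for the sender and a credit row for the receiver, so per-address balances fall out of a simple aggregation:

```python
# Hypothetical double-entry sketch: one transfer -> two ledger rows.
transfers = [
    {"from": "addr_a", "to": "addr_b", "value": 5},
    {"from": "addr_b", "to": "addr_c", "value": 2},
]

def to_double_entry(transfers):
    """Expand each transfer into a debit (negative) and credit (positive) row."""
    rows = []
    for t in transfers:
        rows.append({"address": t["from"], "value": -t["value"]})
        rows.append({"address": t["to"], "value": t["value"]})
    return rows

def balances(rows):
    """Aggregate ledger rows into per-address balances, as a GROUP BY would."""
    out = {}
    for r in rows:
        out[r["address"]] = out.get(r["address"], 0) + r["value"]
    return out

ledger = to_double_entry(transfers)
bal = balances(ledger)
# Because every transfer nets to zero, the ledger as a whole sums to zero,
# which is what makes the representation compatible with conventional
# financial record processing.
```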


Plotly releases Dash DAQ: a UI component library for data acquisition in Python

Natasha Mathur
02 Aug 2018
2 min read
Plotly released Dash DAQ, a modern UI component library that helps with data acquisition in Python, earlier this week. A data acquisition (DAQ) system helps collect, store, and distribute information. Dash DAQ is built on top of Plotly’s Dash, a Python framework for building analytical web applications without requiring the use of JavaScript.

Dash DAQ consists of 16 components, which are used for building user interfaces capable of controlling and reading scientific instruments. To learn more about each component's usage and configuration options, check out the official Dash DAQ components page.

You can use Dash DAQ with the Python drivers provided by instrument vendors. Alternatively, you can write your own drivers with PySerial, PyUSB, or PyVISA.

Dash DAQ is priced at $1,980, as it is built with research labs in mind and is not currently aimed at general Python users. To install Dash DAQ, you have to purchase it first; after you make the purchase, a download page will automatically appear from which you can download it. Only one Dash DAQ library is allotted per developer. The installation steps are listed on the official Dash DAQ installation page.

A variety of apps have already been made using Dash DAQ. Here are some examples:

Wireless Arduino Robot in Python, an app that wirelessly controls Sparki, an Arduino-based robot. Using Dash DAQ for this app gives it clean, intuitive, virtual controls for building GUIs for your hardware.

Robotic Arm in Python, an app that allows you to operate Robotic Arm Edge. Dash DAQ’s GUI components let you interface with all the robot’s motors and LED. Users can even do it from a mobile device, thereby enjoying the experience of a real remote control!

Ocean Optics Spectrometer in Python, an app that allows users to interface with an Ocean Optics spectrometer. Here Dash DAQ offers interactive UI components written in Python, allowing you to read and control the instrument in real time.

Apart from these few examples, there are many more applications that the developers at Plotly have built using Dash DAQ.

plotly.py 3.0 releases
15 Useful Python Libraries to make your Data Science tasks Easier
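The "write your own drivers with PySerial" option mentioned above follows a simple query/response pattern. This is a hypothetical sketch of that pattern with a fake serial port standing in for real hardware — the `MEAS?` command, the class names, and the reply format are all invented; with an actual instrument you would use `serial.Serial(...)` from PySerial in place of `FakeSerial`:

```python
class FakeSerial:
    """Stands in for serial.Serial so the sketch runs without hardware.
    Replies to a single, made-up SCPI-style query command."""
    def __init__(self):
        self._reply = b""

    def write(self, data):
        if data == b"MEAS?\n":       # hypothetical measurement query
            self._reply = b"42.0\n"  # canned instrument response

    def readline(self):
        return self._reply

class InstrumentDriver:
    """Minimal driver: send a query, parse the instrument's text reply."""
    def __init__(self, port):
        self.port = port

    def read_measurement(self):
        self.port.write(b"MEAS?\n")
        return float(self.port.readline().strip())

driver = InstrumentDriver(FakeSerial())
value = driver.read_measurement()
```

A Dash DAQ app would then call something like `driver.read_measurement()` inside a callback to feed a gauge or graph component.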


Microsoft announces the first public preview of SQL Server 2019 at Ignite 2018

Amey Varangaonkar
25 Sep 2018
2 min read
Microsoft made several key announcements at its Ignite 2018 event, which began yesterday in Orlando, Florida. The biggest of them all was the public preview availability of SQL Server 2019. With this new release of SQL Server, businesses will be able to manage their relational and non-relational data workloads in a single database management system.

What we can expect in SQL Server 2019

Microsoft SQL Server 2019 will run either on-premise or on the Microsoft Azure stack
Microsoft announced Azure SQL Database Managed Instance, which will allow businesses to port their databases to the cloud without any code changes
Microsoft announced new database connectors that will allow organizations to integrate SQL Server with other databases such as Oracle, Cosmos DB, MongoDB, and Teradata
SQL Server 2019 will get built-in support for popular open source Big Data processing frameworks such as Apache Spark and Apache Hadoop
SQL Server 2019 will have smart machine learning capabilities, with support for SQL Server Machine Learning Services and Spark machine learning
Microsoft also announced support for Big Data clusters managed through Kubernetes, the Google-incubated container orchestration system

With organizations slowly moving their operations to the cloud, Microsoft seems to have hit the jackpot with the integration of SQL Server and Azure services. Microsoft claims businesses can save up to 80% of their operational costs by moving their SQL databases to Azure. Also, given the rising importance of handling Big Data workloads efficiently, SQL Server 2019 will now be able to ingest, process, and analyze Big Data on its own with built-in Apache Spark and Hadoop capabilities.

Although Microsoft hasn’t hinted at an official release date yet, SQL Server 2019 is expected to be generally available in the next 3-5 months. Of course, that timeline could stretch or shrink depending on feedback from the tool’s early adopters. You can try the public preview of SQL Server 2019 by downloading it from the official Microsoft website.

Microsoft announces the release of SSMS, SQL Server Management Studio 17.6
New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL
Troubleshooting in SQL Server


DeepMind Artificial Intelligence can spot over 50 sight-threatening eye diseases with expert accuracy

Sugandha Lahoti
14 Aug 2018
3 min read
DeepMind’s Health division has achieved a major milestone by developing an artificial intelligence system that can detect over 50 sight-threatening eye diseases with the accuracy of an expert doctor. The system can quickly interpret eye scans and correctly recommend how patients should be referred for treatment. It is the result of a collaboration with Moorfields Eye Hospital; the partnership was announced in 2016 to jointly address some common eye conditions.

How Artificial Intelligence beats current OCT scanners

Currently, eyecare doctors use optical coherence tomography (OCT) scans to help diagnose eye conditions. OCT scans are often hard to read and require time to be interpreted by experts. That delay between scan and treatment can be troublesome if someone needs urgent care. DeepMind’s AI system can automatically detect the features of eye diseases within seconds. It can also prioritize patients by recommending whether they should be referred for treatment urgently.

System architecture

The system uses an easily interpretable representation sandwiched between two different neural networks. The first, known as the segmentation network, analyses the OCT scan and produces a map of the different types of eye tissue and the disease features it observes. The second, known as the classification network, analyses that map to present eyecare professionals with diagnoses and a referral recommendation. The system expresses the referral recommendation as a percentage, allowing clinicians to assess the system’s confidence.

AI-powered dataset

DeepMind has also developed one of the best AI-ready databases for eye research in the world. The original dataset held by Moorfields was suitable for clinical use, but not for machine learning research. The improved database is a non-commercial public asset owned by Moorfields, and it is currently being used by hospital researchers for nine separate studies into a wide range of conditions.

DeepMind’s initial research has yet to be turned into a usable product, and it must still undergo rigorous clinical trials and regulatory approval before being used in practice. Once validated for general use, the system would be available for free across all 30 of Moorfields’ UK hospitals and community clinics for an initial period of five years.

You can read more about the announcement on the DeepMind Health blog, or read the paper in Nature Medicine.

Reinforcement learning optimizes brain cancer treatment to improve patient quality of life
AI beats Chinese doctors in a tumor diagnosis competition
23andMe shares 5mn client genetic data with GSK for drug target discovery
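The two-stage design described above (segmentation map in, percentage recommendations out) can be caricatured in a few lines of plain Python. Everything here — the pixel threshold, the scores, and the labels — is invented for illustration and bears no relation to DeepMind's actual models; it only shows the shape of the pipeline and how a softmax turns scores into confidence percentages:

```python
import math

def segment(scan):
    """Toy 'segmentation': summarize a scan as the fraction of
    abnormal pixels (here, values above an arbitrary 0.5 threshold)."""
    return sum(1 for px in scan if px > 0.5) / len(scan)

def classify(abnormal_fraction):
    """Toy 'classification': map the tissue summary to scores, then
    softmax into percentages so the options sum to 100."""
    scores = {
        "urgent referral": 4.0 * abnormal_fraction,
        "routine referral": 1.0,
        "observation only": 2.0 * (1 - abnormal_fraction),
    }
    z = sum(math.exp(s) for s in scores.values())
    return {k: 100 * math.exp(s) / z for k, s in scores.items()}

scan = [0.9, 0.8, 0.7, 0.2, 0.1]  # mostly "abnormal" pixels
recommendation = classify(segment(scan))
```

The interpretable hand-off between the two stages is the point: a clinician can inspect the intermediate tissue summary, not just the final percentages.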


Unity releases ML-Agents v0.3: Imitation Learning, Memory-Enhanced Agents and more

Sugandha Lahoti
16 Mar 2018
2 min read
The Unity team has released version 0.3 of its much-anticipated toolkit, ML-Agents. The new release is jam-packed with features such as Imitation Learning, Multi-Brain Training, On-Demand Decision-Making, and Memory-Enhanced Agents. Here’s a quick look at what each of these features brings to the table:

Behavioral Cloning, an imitation learning algorithm

ML-Agents v0.3 uses imitation learning for training agents. Imitation learning uses demonstrations of the desired behavior to provide a learning signal to the agents. For v0.3, the team chose Behavioral Cloning as the imitation learning algorithm. It works by collecting training data from a teacher agent and then simply using it to directly learn a behavior.

Multi-Brain Training

Using Multi-Brain Training, one can train more than one brain at a time, each with its own observation and action space. At the end of training there is only one binary (.bytes) file, which contains one neural network model per brain.

On-Demand Decision-Making

Agents ask for decisions in an on-demand fashion, rather than making decisions every step or every few steps of the engine. Users can enable and disable On-Demand Decision-Making for each agent independently with the click of a button!

Learning under partial observability

The Unity team has included two methods for dealing with partial observability within learning environments through Memory-Enhanced Agents. The first memory enhancement is Observation-Stacking, which allows an agent to keep track of up to the ten previous observations within an episode and feed them all to the brain for decision-making. The second is an optional recurrent layer for the neural network being trained. These Recurrent Neural Networks (RNNs) can learn to keep track of important information over time in a hidden state.

Apart from these features, there is a new Docker image, changes to API semantics, and a major revamp of the documentation, all to make setup and usage simpler and more intuitive. Users can check the GitHub page to download the new version and find all the details on the release page.
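The Observation-Stacking idea above can be sketched in plain Python: keep up to the last ten observations in a rolling window and hand the flattened stack to the policy. This is an illustration of the concept only, not Unity's C# API; the class and method names are invented:

```python
from collections import deque

class StackedObservations:
    """Rolling window of the most recent observations, flattened
    (oldest to newest) into a single fixed-size input vector."""
    def __init__(self, stack_size=10, obs_size=3):
        self.obs_size = obs_size
        self.stack = deque(maxlen=stack_size)  # old entries fall off

    def add(self, obs):
        assert len(obs) == self.obs_size
        self.stack.append(list(obs))

    def as_input(self):
        # Zero-pad at the front so the vector has a constant length
        # even early in an episode, before the window fills up.
        flat = [x for obs in self.stack for x in obs]
        pad = [0.0] * (self.stack.maxlen * self.obs_size - len(flat))
        return pad + flat

stacker = StackedObservations(stack_size=10, obs_size=3)
for step in range(12):                     # 12 steps; window keeps last 10
    stacker.add([step, step + 0.5, step + 1.0])
model_input = stacker.as_input()           # always 10 * 3 = 30 values
```

The fixed-length input is what lets a plain feed-forward brain "remember" recent history without the recurrent layer the second method adds.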

Are China’s facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience?

Fatema Patrawala
26 Mar 2019
7 min read
We've all heard stories of China's technological advancements and how facial recognition is really gaining traction there. If you needed any proof of China's technological might over most countries of the world, it doesn't get any better (or scarier) than this. On Sunday, Matthew Brennan, a tech analyst covering Tencent and WeChat, tweeted a video of a facial recognition kiosk at Chengdu Shuangliu International Airport in the People’s Republic of China. The kiosk seemed to give Brennan minutely personalized flight information as he walked by, after automatically scanning his face in just seconds.

The simple 22-second video crossed 1.2 million views in just over a day. The tweet went viral, with many commenters writing how dystopian or terrifying they found this technology, and suggesting how wary we should be of the proliferation of biometric systems like those used in China. https://twitter.com/mbrennanchina/status/1109741811310837760

“There’s one guarantee that I’ll never get to go to China now,” one Twitter user wrote in response. “That’s called fascism and it’s not moral or ok,” another comment read. https://twitter.com/delruky/status/1109812054012002304

Surveillance tech isn’t a new idea

Airport facial recognition technology isn’t new in China, and similar systems are already being implemented at airports in the United States. In October, Shanghai’s Hongqiao airport reportedly debuted China’s first system allowing facial recognition for automated check-in, security clearance, and boarding. And since 2016, the Department of Homeland Security has been testing facial recognition at U.S. airports. This biometric exit program uses photos taken at TSA checkpoints to perform facial recognition tests to verify international travelers’ identities. Documents recently obtained by BuzzFeed show that Homeland Security is now racing to implement this system at the top 20 airports in the U.S. by 2021.

And it isn’t just the federal government rolling out facial recognition at American airports. In May 2017, Delta announced it was testing a face-scanning system at Minneapolis-Saint Paul International Airport that allowed customers to check in their bags, or, as the company called it in a press release, “biometric-based bag drop.” The airline followed up those tests with what it celebrated as “the first biometric terminal” in the U.S. at Atlanta’s Maynard H. Jackson International Airport at the end of last year. Calling it an “end-to-end Delta Biometrics experience,” Delta’s system uses facial recognition kiosks for check-in, baggage check, TSA identification, and boarding. The facial recognition option saves an average of two seconds per customer at boarding, or nine minutes when boarding a wide-body aircraft.

“Delta’s successful launch of the first biometric terminal in the U.S. at the world’s busiest airport means we are designing the airport biometric experience blueprint for the industry,” said Gil West, Delta’s COO. “We’re removing the need for a customer checking a bag to present their passport up to four times per departure – which means we’re giving customers the option of moving through the airport with one less thing to worry about, while empowering our employees with more time for meaningful interactions with customers.”

Dubai International Airport's Terminal 3 will soon replace its security-clearance counter with a walkway tunnel filled with 80 face-scanning cameras disguised as a distracting immersive video. The airport has an artsy, colorful video security and customs tunnel that scans your face, adds you to a database, indexes you with artificial intelligence, and decides whether or not you're free to leave.

Potential dangers surveillance tech could bring

At first glance, the kiosk does seem really cool. But it should also serve as a warning as to what governments and companies can do with our data if left unchecked. After all, if an airport kiosk can identify Brennan in seconds and show him his travel plans, the Chinese government can clearly use facial recognition tech to identify citizens wherever they go. The government could record everyone's face and automatically fine or punish someone who breaks or bends the rules. As a matter of fact, it is already doing this via its social credit system. If you are officially designated a “discredited individual,” or laolai in Mandarin, you are banned from spending on “luxuries,” whose definition includes air travel and fast trains. This class of people, most of whom have shirked their debts, sit on a public database maintained by China’s Supreme Court. For them, daily life is a series of inflicted indignities – some big, some small – from not being able to rent a home in their own name, to being shunned by relatives and business associates.

Alibaba, China’s equivalent of Amazon, already has control over the traffic lights in one Chinese city, Hangzhou. Alibaba is far from shy about its ambitions: it has 120,000 developers working on the problem and intends to commercialize and sell the data it gathers about citizens.

Surveillance technology is pervasive in our society, leading to fierce debate between proponents and opponents. Government surveillance, in particular, has come increasingly under public scrutiny, with proponents arguing that it increases security and opponents decrying its invasion of privacy. Critics have loudly accused governments of employing surveillance technologies that sweep up massive amounts of information, intruding on the privacy of millions, with little to no evidence of success. And yet, evaluating whether surveillance technology increases security is a difficult task. From the War Resisters League to the Government Accountability Project, Data for Black Lives and 18 Million Rising, from Families Belong Together to the Electronic Frontier Foundation, more than 85 groups signed letters to corporate giants Microsoft, Amazon, and Google, demanding that the companies commit not to sell face surveillance technology to the government.

Shoshana Zuboff, author of The Age of Surveillance Capitalism, notes that technology companies insist their technology is too complex to be legislated, have poured billions into lobbying against oversight, and have built empires on publicly funded data and the details of our private lives. They have repeatedly rejected established norms of societal responsibility and accountability.

Causes more harm to minority groups and vulnerable communities

There is a long history of surveillance technologies that particularly impact vulnerable communities and groups such as immigrants, communities of color, religious minorities, and even domestic violence and sexual assault survivors. Privacy is not just a luxury that many residents cannot afford; in surveillance-heavy precincts, for practical purposes, privacy cannot be bought at any price.

Privacy advocates have sometimes struggled to demonstrate the harms of government surveillance to the general public. Part of the challenge is empirical. Federal, state, and local governments shield their high-technology operations with stealth, obfuscation, and sometimes outright lies when obliged to answer questions. In many cases, perhaps most, these defenses defeat attempts to extract a full, concrete accounting of what the government knows about us and how it puts that information to use. There is a lot less mystery for the poor and disfavored, for whom surveillance takes palpable, often frightening forms.

The question is, as many commenters pointed out after Brennan’s tweet, do we want this kind of technology available? If so, how can it be kept in check and not abused by governments and other institutions? That’s something we don’t have an answer for yet, and an answer we desperately need.

Alarming ways governments are using surveillance tech to watch you
Seattle government arranges public review of the city’s surveillance tech systems
The Intercept says IBM developed NYPD surveillance tools that let cops pick targets based on skin color


Join Tableau at DreamTX 2020—here’s what you need to know from What's New

Anonymous
09 Dec 2020
4 min read
Kristin Adderson | December 9, 2020

When Tableau joined the Salesforce family, we were excited to accelerate and extend our mission to help people see and understand data. That’s what building a Data Culture is all about! We started that work last year by bringing the Tableau Data Fam to Dreamforce. We got to know the vibrant Trailblazer community and introduced people to the innovative analytics platform that helps us deliver on the promise of our mission—through demos, mini-sessions, and a rockstar keynote.

[Image: Tableau Data Fam joins the 2019 Dreamforce]

While we can't get together in person this year, we're making the most of the annual conference and transforming Dreamforce into a virtual format called Dreamforce To You. We kicked off the event with a keynote on December 2, and now we want to invite you to join us for a special four-day event: DreamTX. Open to all Trailblazers, including the Tableau Data Fam, DreamTX takes place December 14-17, and is free for anyone to join. We’ll be bringing together people from all around the world to learn, connect, and share—directly from our homes.

If you’re new to Salesforce, you might be wondering what DreamTX is all about and whether you should take part. So, here’s our guide to help you get the most from DreamTX.

What is DreamTX?

DreamTX, the Dreamforce Trailblazer Experience, is a four-day virtual event jam-packed with demos, luminary sessions, customer success stories, and conversations with leadership, with times friendly for the Americas, Asia Pacific, and Europe. With something for every line of business and every industry, you’ll have tons of opportunities to learn from customers and peers who have built resilience through 2020, all right from your couch. You’ll learn how Customer 360 transforms businesses, hear amazing stories of leadership during a pandemic, hang out and connect with other Trailblazers, and even welcome some surprise guests for entertainment. Best of all, it’s free to everyone, making it the most inclusive Dreamforce ever.

What will I learn?

You can be sure that Tableau is rolling into DreamTX energized and ready to share how critical a Data Culture is to empower everyone. We’ll feature several sessions all about analytics, integration, and digital transformation, as well as a vision and roadmap session for the Tableau platform and Tableau CRM (formerly Einstein Analytics). Be sure to bookmark the “Unleash the Power of Data: MuleSoft and Tableau” session to learn how business and IT leaders can unlock data from disconnected applications to get actionable insights in one place—in Tableau, of course! And check out “AI Predictions with Einstein Discovery” to learn about our newest AI integration. This session will teach you how to build and deploy trusted ML-powered predictions in Tableau with no code required, enabling more teams to benefit from the power of AI. Ready to register and start planning your schedule?

How do I get ready for DreamTX?

Whether it’s your first Salesforce event or you’re a seasoned veteran, follow these tips to make the most of your DreamTX experience.

1. Register at Dreamforce.com
Head over to Dreamforce.com and reserve your spot by clicking the “Sign Me Up” button in the top right corner. If you’ve already created your Trailblazer Profile (say, from Dreamforce 2019), you can log in right away with your existing information. Otherwise, sign up in just a few steps to get in on all the learning and networking benefits that DreamTX and the Trailblazer Community provide!

2. Build your schedule
Each DreamTX session will be 20–25 minutes long, spanning different themed channels. You can use Trail Maps as guides to which sessions are most relevant to you—for example, select “Analytics” from the drop-down menu to see the recommended sessions for our data rockstars. Then simply click on a session title and select “Bookmark” to save it to your personal DreamTX agenda, or “Add to Calendar” to save it to your personal calendar. You can also view all available sessions and add them to your calendar using the Agenda Builder. (There’s a short video featured on that webpage to explain how.) And don’t forget to check out the demos and workshops available on day four of DreamTX! They’re great learning opportunities on topics like the Tableau Viz Lightning Web Component, building advanced reports, and the Tableau CRM developer experience.

3. Enjoy DreamTX!
When it’s time, throw on your most comfortable clothes, grab a snack, and head over to your favorite spot on the couch to get watching. That’s the beauty of DreamTX being virtual—everyone’s invited to participate and learn! We look forward to seeing our Data Fam there. Get started by registering now, and join the conversation on social at #DreamTX.


Core security features of Elastic Stack are now free!

Amrata Joshi
21 May 2019
3 min read
Today, the team at Elastic announced that the core security features of the Elastic Stack are now free. They also announced about releasing Elastic Stack versions 6.8.0 and 7.1.0 and the alpha release of Elastic Cloud on Kubernetestoday. With the free core security features, users can now define roles that protect index and cluster level access, encrypt network traffic, create and manage users, and fully secure Kibana with Spaces. The team had opened the code for these features last year and has finally made them free today which means the users can now run a fully secure cluster. https://twitter.com/heipei/status/1130573619896225792 Release of Elastic Stack versions 6.8.0 and 7.1.0 The team also made an announcement about releasing versions 6.8.0 and 7.1.0 of the Elastic Stack, today. These versions do not contain new features but they make the core security features free in the default distribution of the Elastic Stack. The core security features include TLS for encrypted communications, file and native realm to create and manage users, and role-based access control to control user access to cluster APIs and indexes. The features also include allowing multi-tenancy for Kibana with security for Kibana Spaces. Previously, these core security features required a paid gold subscription, however, now, they are free as a part of the basic tier. Alpha release of Elastic Cloud on Kubernetes The team has also announced the alpha release of Elastic Cloud on Kubernetes (ECK) which is the official Kubernetes Operator for Elasticsearch and Kibana. It is a new product based on the Kubernetes Operator pattern that lets users manage, provision, and operate Elasticsearch clusters on Kubernetes. It is designed for automating and simplifying how Elasticsearch is deployed and operated in Kubernetes. It also provides an official way for orchestrating Elasticsearch on Kubernetes and provides a SaaS-like experience for Elastic products and solutions on Kubernetes. 
The team has moved the core security features into the default distribution of the Elastic Stack to ensure that all clusters launched and managed by ECK are secured by default at creation time. Clusters deployed via ECK include free-tier features and capabilities such as Kibana Spaces, frozen indices for dense storage, Canvas, Elastic Maps, and more. Users can also monitor Kubernetes logs and infrastructure with the Elastic Logs and Elastic Infrastructure apps.

Some users think that security shouldn't be an added feature but should be built in. A user commented on Hacker News, "Security shouldn't be treated as a bonus feature." Another user commented, "Security should almost always be a baseline requirement before something goes up for public sale." Others are happy about the news. A user commented, "I know it's hard to make a buck with an open source business model but deciding to charge more for security-related features is always so frustrating to me. It leads to a culture of insecure deployments in environments when the business is trying to save money. Differentiate on storage or number of cores or something, anything but auth/security. I'm glad they've finally reversed this."

To know more about this news, check out the blog post by Elastic.

Elasticsearch 7.0 rc1 releases with new allocation and security features
Elastic Stack 6.7 releases with Elastic Maps, Elastic Update and much more!
AWS announces Open Distro for Elasticsearch licensed under Apache 2.0
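In 6.8.0 and 7.1.0, the now-free security features are switched on through node configuration. A minimal sketch of an elasticsearch.yml fragment enabling security and transport TLS, assuming certificates have already been generated (the keystore file name here is a placeholder):

```yaml
# Enable the free Basic-tier security features (6.8 / 7.1+).
xpack.security.enabled: true

# Encrypt node-to-node traffic; "elastic-certificates.p12" is a placeholder
# for a keystore generated with elasticsearch-certutil.
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```

With this in place, built-in user passwords can be set with the `elasticsearch-setup-passwords` tool, and roles and users can then be managed through the security APIs or Kibana.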

Neo4j introduces Aura, a new cloud service to supply a flexible, reliable and developer-friendly graph database

Vincy Davis
07 Nov 2019
2 min read
Last year, Neo4j announced the availability of its Enterprise Edition under a commercial license aimed at larger companies. Yesterday, the graph database management firm introduced a new managed cloud service called Aura, directed at smaller companies. The new service is developed for the market segment between the larger companies and Neo4j's open source product.

https://twitter.com/kfreytag/status/1192076546070253568

Aura aims to supply a flexible, reliable, and developer-friendly graph database. In an interview with TechCrunch, Emil Eifrem, CEO and co-founder of Neo4j, said, "To get started with, an enterprise project can run hundreds of thousands of dollars per year. Whereas with Aura, you can get started for about 50 bucks a month, and that means that it opens it up to new segments of the market."

Aura offers a definite value proposition, a flexible pricing model, and other management and security updates. It will also scale with a company's growing data requirements. In simple words, Aura seeks to simplify developers' work by allowing them to focus on building applications while Neo4j takes care of the database.

Many developers are excited to try out Aura.

https://twitter.com/eszterbsz/status/1192359850375884805
https://twitter.com/IntriguingNW/status/1192352241853849600
https://twitter.com/sixwing/status/1192090394244333569

Neo4j rewarded with $80M Series E, plans to expand company
Neo4j 3.4 aims to make connected data even more accessible
Introducing PostgREST, a REST API for any PostgreSQL database written in Haskell
Linux Foundation introduces strict telemetry data collection and usage policy for all its projects
MongoDB is partnering with Alibaba
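Aura hosts standard Neo4j databases, so developers work with it through Cypher, Neo4j's query language, just as with a self-managed instance. A minimal, hypothetical example of the kind of query a developer might run against an Aura database (the labels and data here are invented for illustration):

```cypher
// Create two people connected by a relationship...
CREATE (a:Person {name: 'Ada'})-[:KNOWS]->(b:Person {name: 'Grace'});

// ...then traverse the graph to query it back.
MATCH (p:Person)-[:KNOWS]->(q:Person)
RETURN p.name, q.name;
```

The point of the managed service is that queries like this run unchanged while Neo4j handles provisioning, backups, and upgrades behind the scenes.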
Like newspapers, Google algorithms are protected by the First Amendment, making them hard to legally regulate

Savia Lobo
10 Sep 2018
4 min read
At the end of last month, Google denied U.S. President Donald Trump's accusatory tweet claiming that its algorithms favor liberal media outlets over right-wing ones. Trump's accusations hinted at Google regulating the information that comes up in Google searches. However, governing or regulating algorithms, and the decisions they make about which information should be provided and prioritized, is tricky.

Eugene Volokh, a University of California-Los Angeles law professor and author of a 2012 white paper on the constitutional First Amendment protection of search engines, said, "Each search engine's editorial judgment is much like many other familiar editorial judgments." A newspaper case from 1974 sheds light on what the government can control under the First Amendment, including companies' algorithms and how they produce and organize information. On similar lines, Google has the right to protect its algorithms from being regulated by law.

Google has the right to protect algorithms, based on a 1974 case

In the 1974 case Miami Herald v. Tornillo, the Supreme Court struck down a Florida law that gave political candidates the "right of reply" to criticisms they faced in newspapers. The law required the newspaper to publish a response from the candidate and to place it, free of charge, in a conspicuous place. The candidates' lawyers contended that newspapers held near-monopolistic roles when it came to reaching audiences, and that compelling them to publish responses was the only way to ensure that candidates could have a comparable voice. The 1974 case appears similar to the current scenario: if Google's algorithms are manipulated, those who are harmed will have comparatively limited tools through which to be heard.

The Herald refused to comply with the law. Its editors argued that the law violated the First Amendment because it allowed the government to compel a newspaper to publish certain information.
The Supreme Court agreed with the Herald, and the Justices explained that the government cannot force newspaper editors "to publish that which reason tells them should not be published."

Why Google cannot be regulated by law

As in the 1974 case, Justices have used that decision to highlight that the government cannot compel expression. They also emphasized that the information selected by editors for their audiences is part of a process, and that the government has no role in that process. The court wrote, "The choice of material to go into a newspaper and the decisions as to limitations on size and content of the paper, and treatment of public issues and public officials—fair or unfair—constitute the exercise of editorial control and judgment."

According to two federal court decisions, however, Google is not a newspaper and algorithms are not human editors; thus, a search engine or social media company's algorithm-based content decisions should not be protected in the same way as those made by newspaper editors. One judge explained, "Here, the process, which involves the . . . algorithm, is objective in nature. In contrast, the result, which is the PageRank—or the numerical representation of relative significance of a particular website—is fundamentally subjective in nature." Ultimately, the judge compared Google's algorithms to the types of judgments that credit-rating companies make: these firms have a right to develop their own processes and to communicate the outcomes.

Journalistic protections and algorithms were also compared in the Supreme Court's 2010 ruling in Citizens United v. FEC. The case focused on the parts of the Bipartisan Campaign Reform Act that limited certain types of corporate donations during elections. Citizens United, which challenged the law, is a political action committee.
Chief Justice John Roberts explained that the law, because of its limits on corporate spending, could allow the government to halt newspapers from publishing certain information simply because they are owned by corporations, which could also harm public discourse. Any attempt to regulate Google's and other corporations' algorithmic outputs would therefore have to overcome:

1. The hurdles the Supreme Court put in place in the Herald case regarding compelled speech and editorial decision-making
2. The Citizens United precedent that corporate speech, which would also include a company's algorithms, is protected by the First Amendment

Read more about this news in detail on Columbia Journalism Review.

Google slams Trump's accusations, asserts its search engine algorithms do not favor any political ideology
North Korean hacker charged for WannaCry ransomware and for infiltrating Sony Pictures Entertainment
California's tough net neutrality bill passes state assembly vote

TensorFlow team releases a developer preview of TensorFlow Lite with new mobile GPU backend support

Natasha Mathur
18 Jan 2019
2 min read
The TensorFlow team released a developer preview of the newly added GPU backend support for TensorFlow Lite earlier this week. A full open-source release is planned for later in 2019.

The team has been using TensorFlow Lite GPU inference support at Google in its products for several months. For instance, the new GPU backend accelerated the foreground-background segmentation model by over 4x and the new depth estimation model by over 10x compared to the floating point CPU implementation. Similarly, using GPU backend support for YouTube Stories and Playground Stickers, the team saw speedups of up to 5-10x in its real-time video segmentation model across a variety of phones.

The team found that the new GPU backend is much faster (2-7x) than the original floating point CPU implementation across a range of deep neural network models. It also notes that the GPU speedup is most significant on more complex neural network models involving dense prediction/segmentation or classification tasks. For small models the speedup can be smaller, and using the CPU may be more beneficial as it avoids latency costs during memory transfers.

How does it work?

The GPU delegate is first initialized once Interpreter::ModifyGraphWithDelegate() is called in Objective-C++, or by calling the Interpreter's constructor with Interpreter.Options in Java. During this process, a canonical representation of the input neural network is built, and a set of transformation rules is applied to it. After this, the compute shaders are generated and compiled. The GPU backend currently uses OpenGL ES 3.1 Compute Shaders on Android and Metal Compute Shaders on iOS. Various architecture-specific optimizations are employed while creating the compute shaders. Once optimization is complete, the shader programs are compiled and the new GPU inference engine is ready.
At inference time, for each input, the inputs are moved to the GPU if required, the shader programs are executed, and the outputs are moved back to the CPU if necessary. The team intends to expand the coverage of operations, finalize the APIs, and optimize the overall performance of the GPU backend in the future. For more information, check out the official TensorFlow Lite GPU inference release notes.

Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]
TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf.function and more
Google AdaNet, a TensorFlow-based AutoML framework
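The Java flow described above can be sketched roughly as follows. This is based on the developer-preview API, in which `GpuDelegate` lived under an experimental package and may have moved since; `modelFile`, `input`, and `output` are placeholders for the app's own model buffer and tensors:

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.experimental.GpuDelegate;

// Create the GPU delegate and attach it to the interpreter options.
GpuDelegate delegate = new GpuDelegate();
Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);

// Constructing the Interpreter with these options triggers the delegate
// initialization described above (graph transformation, shader compilation).
Interpreter interpreter = new Interpreter(modelFile, options);

// Run inference as usual; supported ops execute on the GPU,
// with inputs/outputs transferred between CPU and GPU as needed.
interpreter.run(input, output);

// Release GPU resources when done.
delegate.close();
```

Note that the delegate must stay alive for the lifetime of the interpreter, and closing it releases the compiled shader programs.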

Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content

Sugandha Lahoti
03 Jul 2019
4 min read
Yesterday, German authorities said that they have imposed a 2 million-euro ($2.3 million) fine on Facebook under a law designed to combat hate speech. The authorities said that Facebook had provided "incomplete" information in mandatory transparency reports about illegal content, such as hate speech. Facebook received 1,704 complaints and removed 362 posts between January 2018 and June 2018. In the second half of 2018, the company received 1,048 complaints.

In a statement to Reuters, Germany's Federal Office of Justice said that by tallying only certain categories of complaints, the web giant had created a skewed picture of the extent of violations on its platform: "The report lists only a fraction of complaints about illegal content which created a distorted public image about the size of the illegal content and the way the social network deals with the complaints." The agency said Facebook's report did not include complaints relating to anti-Semitic insults and material designed to incite hatred against persons or groups based on their religion or ethnicity.

Germany's NetzDG law has been criticized by experts

The NetzDG law, under which Facebook was fined, is Germany's internet transparency law, passed in 2017 to combat agitation and fake news on social networks. Under this law, commercial social networks are obliged to establish a transparent procedure for dealing with complaints about illegal content and are subject to a reporting and documentation obligation. Per the law, social media platforms must check complaints immediately, delete "obviously illegal" content within 24 hours, delete any other illegal content within 7 days of checking, and block access to it. The deleted content must be stored for at least ten weeks for evidence purposes. In addition, providers must designate a service agent in Germany, both for the authorities and for civil proceedings, and submit a six-monthly report on complaints received and how they have been dealt with.
However, the law has faced constant criticism from various experts, journalists, social networks, the UN, and the EU. Experts have said that short, rigid deletion periods and the high threat of fines would compromise individuals' freedom of speech: social networks would be forced to remove contributions in case of doubt, even those requiring context-dependent consideration.

Facebook had also criticized the NetzDG draft. In a statement sent to the German Bundestag at the end of May 2017, the company stated, "The constitutional state must not pass on its own shortcomings and responsibility to private companies. Preventing and combating hate speech and false reports is a public task from which the state must not escape."

In response to the fine, Facebook said, "We want to remove hate speech as quickly and effectively as possible and work to do so. We are confident our published NetzDG reports are in accordance with the law, but as many critics have pointed out, the law lacks clarity." It added, "We will analyze the fine notice carefully and reserve the right to appeal."

Facebook is also facing privacy probes over its policies and data breaches, and was fined by the EU for failing to give correct information during the regulatory review of its WhatsApp takeover. Last week, Italy's privacy regulator fined Facebook €1 million for violations connected to the Cambridge Analytica scandal. The agency said 57 Italians had downloaded a personality test app called ThisIsYourDigitalLife, which was used to collect Facebook information on both themselves and their Facebook friends. The app was then used to provide data to Cambridge Analytica for targeting voters during the 2016 U.S. presidential election.

Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily
Mark Zuckerberg is a liar, fraudster, unfit to be the C.E.O. of Facebook, alleges Aaron Greenspan
YouTube’s new policy to fight online hate and misinformation misfires due to poor execution
Apache Spark 2.3 now has native Kubernetes support!

Savia Lobo
07 Mar 2018
2 min read
Two leading open-source projects, Apache Spark and Kubernetes, now work together: Apache Spark 2.3 has native Kubernetes support.

Kubernetes: a natural fit for Apache Spark

Apache Spark is a framework for large-scale data processing and an important tool for data scientists. It offers a robust platform for major tasks, be it data transformation, analytics, or machine learning. Recently, data scientists have been embracing containers to improve their workflows, leveraging benefits such as packaging of dependencies and creating reproducible artifacts. This is where Kubernetes, an open-source system for automating the deployment, scaling, and management of containerized applications, comes to the rescue: it enables running Spark applications in containers. The combination of Apache Spark and Kubernetes has dual benefits. First, data scientists keep their principal tool, Apache Spark, for managing distributed data processing tasks; second, they can work with containers through the Kubernetes API.

With Apache Spark 2.3, users can run Spark workloads on an existing Kubernetes 1.7+ cluster. This means Spark workloads can make direct use of Kubernetes clusters for multi-tenancy and sharing through Namespaces and Quotas, along with administrative features such as Pluggable Authorization and Logging. Spark workloads require no changes or new installations on the Kubernetes cluster: one simply creates a container image and sets up the right RBAC roles for the Spark application, and it is ready.

The native Kubernetes support offers fine-grained management of Spark applications, improved elasticity, and seamless integration with logging and monitoring solutions. The community is also exploring advanced use cases such as managing streaming workloads and leveraging service meshes like Istio.

Visit the Databricks blog to read more on this topic.
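A sketch of what submission looks like with the new Kubernetes scheduler backend, following the pattern in the Spark 2.3 documentation; `<k8s-apiserver>` and `<spark-image>` are placeholders for a real cluster endpoint and a container image built with the scripts Spark ships:

```shell
# Submit the bundled SparkPi example directly to a Kubernetes cluster.
# The k8s:// prefix tells spark-submit to use the Kubernetes scheduler;
# the driver and executors run as pods using the given container image.
bin/spark-submit \
  --master k8s://https://<k8s-apiserver>:443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
```

The `local://` scheme indicates the jar is already inside the container image, so nothing needs to be uploaded at submission time.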

DuckDuckGo proposes “Do-Not-Track Act of 2019” to require sites to respect DNT browser setting

Sugandha Lahoti
07 May 2019
3 min read
DuckDuckGo, the search company known for its privacy protection policies, has proposed draft legislation that would require sites to respect the Do Not Track (DNT) browser setting. Called the "Do-Not-Track Act of 2019," the legislation would mandate that websites not track people who have enabled the DNT signal in their browsers. Per a recent study conducted by DuckDuckGo, a quarter of people have turned on this setting, and most were unaware that big sites do not respect it.

[box type="shadow" align="" class="" width=""] "Do-Not-Track Signal" means a signal sent by a web browser or similar User Agent that conveys a User's choice regarding online Tracking, reflects a deliberate choice by the user, and complies with the latest Tracking Preference Expression (DNT) specification published by the World Wide Web Consortium (W3C).[/box]

DuckDuckGo's proposal comes just days after Google announced more privacy controls for its users. Last week, Google launched a new feature allowing users to manually delete all or part of their location history and web and app activity data. It also offers a time limit for how long activity data is saved (3 or 18 months) before deleting it automatically. However, it offers no option to avoid storing the history in the first place.

DuckDuckGo's proposed Do-Not-Track Act of 2019 details the following points:

No third-party tracking by default. Data brokers would no longer be legally able to use hidden trackers to slurp up your personal information from the sites you visit. And the companies that deploy the most trackers across the web — led by Google, Facebook, and Twitter — would no longer be able to collect and use your browsing history without your permission.

No first-party tracking outside what the user expects. For example, if you use WhatsApp, its parent company (Facebook) wouldn't be able to use your data from WhatsApp in unrelated situations (like for advertising on Instagram, also owned by Facebook).
As another example, if you go to a weather site, it could give you the local forecast, but not share or sell your location history.

The legislation would include exceptions for debugging, auditing, security, non-commercial research, and journalism. However, each of these exceptions would apply only if a site adopts strict data-minimization practices, such as using the least amount of personal information needed and anonymizing it when possible. The restrictions would come into play only if a consumer has turned on the Do Not Track setting in their browser settings.

For violations of the Do-Not-Track Act of 2019, DuckDuckGo proposes fines of no less than $50,000 and no more than $10,000,000 or 2% of an organization's annual revenue, whichever is greater. If the act passes into law, sites would be required to cease certain user-tracking methods, which means less data available to inform marketing and advertising campaigns. The proposal is still quite far from becoming law, but presidential candidate Elizabeth Warren's recent proposal to regulate "big tech companies" may give it a much-needed boost.

Twitter users complimented the act.

https://twitter.com/Bendineliot/status/1123579280892538881
https://twitter.com/jmhaigh/status/1123574469950414848
https://twitter.com/n0ahrabbit/status/1123572013153439745

For the full text, download the proposed Do-Not-Track Act of 2019.

DuckDuckGo now uses Apple MapKit JS for its map and location-based searches
DuckDuckGo chooses to improve its products without sacrificing user privacy
'Ethical mobile operating system' /e/, an alternative for Android and iOS, is in beta
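On the wire, the DNT setting is simply an HTTP request header (`DNT: 1`, per the W3C Tracking Preference Expression spec). A minimal, hypothetical sketch of how a site could honor it server-side; the helper names here are invented for illustration:

```python
# Hypothetical server-side helpers for honoring the Do Not Track signal.
# A browser with the setting enabled sends the header "DNT: 1".

def user_opted_out(headers):
    """Return True if the request carries a DNT: 1 header."""
    return headers.get("DNT") == "1"

def handle_request(headers):
    """Decide whether tracking may run for this request."""
    if user_opted_out(headers):
        return {"tracking": "disabled"}
    return {"tracking": "enabled"}

# DNT enabled: under the proposed act, the site must not track.
print(handle_request({"DNT": "1"}))
# No signal: the act's restrictions would not come into play.
print(handle_request({}))
```

Under the proposed act, the legal obligation would attach exactly when this signal is present, which is what makes the check above enforceable rather than advisory.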