Tech News


Is the YouTube algorithm’s promoting of #AlternativeFacts like Flat Earth having a real-world impact?

Sugandha Lahoti
21 Nov 2018
3 min read
It has not been long since the Logan Paul controversy hit the internet, when people criticized YouTube's algorithms and complained that they were still seeing recommendations for Logan Paul's videos even after they were taken down. Earlier this week, a "Flat Earth Conference" was held in Denver, Colorado, where some attendees talked about how YouTube had persuaded them to believe the flat earth theory. In fact, Logan Paul was one of the conference's keynote speakers, despite not believing that the Earth is flat.

The attendees were interviewed by the Daily Beast. Many participants told the Daily Beast that they came to believe in the Flat Earth theory through YouTube videos. "It came on autoplay," said Joshua Swift, a conference attendee. "So I didn't actively search for Flat Earth. Even months before, I was listening to Alex Jones." Recently, NBA star Kyrie Irving also spoke about his obsession with the flat earth theory, blaming YouTube videos for it. Irving spoke of having wandered deep down a "rabbit hole" on YouTube.

This has brought the emphasis back onto the recommendation system that YouTube uses. In a blog post, Guillaume Chaslot, an ex-Googler who helped build the YouTube algorithm, explains: "Flat Earth is not a 'small bug'. It reveals that there is a structural problem in Google's AIs and they exploit weaknesses of the most vulnerable people, to make them believe the darnedest things." He mentions a list of Flat Earth videos which were promoted on YouTube.

https://www.youtube.com/watch?v=1McqA9ChCnA
https://www.youtube.com/watch?v=XFSH5fnqda4

This makes one question whether the YouTube algorithm is evil. The YouTube algorithm recommends videos based on watch time. More watch time means more revenue and more scope for targeted ads. What this changes is the fundamental concept of choice and the exercising of user discretion.
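To make the incentive concrete, here is a toy sketch, not YouTube's actual system, of how ranking purely by expected watch time can diverge from ranking by organic engagement. All titles and numbers below are made up for illustration:

```python
# Toy illustration (not YouTube's real ranker): the same candidate
# videos ordered by expected watch time vs. by organic engagement.
# All video names and numbers are hypothetical.

videos = [
    # (title, expected_watch_minutes, likes + comments + subscribes)
    ("calm explainer", 4.0, 900),
    ("outrage-bait conspiracy", 11.0, 150),
    ("cat compilation", 6.0, 700),
]

by_watch_time = sorted(videos, key=lambda v: v[1], reverse=True)
by_engagement = sorted(videos, key=lambda v: v[2], reverse=True)

print([v[0] for v in by_watch_time])   # watch-time ranking
print([v[0] for v in by_engagement])   # engagement ranking
```

With these made-up numbers, the watch-time ranking surfaces the conspiracy video first even though it earns the least organic engagement, which is exactly the tension Chaslot describes.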
The moment the YouTube algorithm treats watch time as the most important metric for recommending videos, less importance goes to organic interactions on YouTube, such as liking, commenting and subscribing to videos and channels.

Chaslot was fired by Google in 2013 over performance issues. His claim was that he wanted to change the approach of the YouTube algorithm to make it more aligned with democratic values instead of being devoted to just increasing watch time. Chaslot has created Algotransparency, a site that scans and monitors YouTube recommendations daily. Other Twitter users have also supported Chaslot's article.

https://twitter.com/tristanharris/status/1064973499540869121
https://twitter.com/technollama/status/1064573492329365504
https://twitter.com/sivavaid/status/1064527872667369473

Is YouTube's AI Algorithm evil?
YouTube has a $25 million plan to counter fake news and misinformation
YouTube went down, Twitter flooded with deep questions, YouTube back and everyone is back to watching cat videos


Why scepticism is important in computer security: Watch James Mickens at USENIX 2018 argue for thinking over blindly shipping code

Melisha Dsouza
21 Nov 2018
6 min read
"Technology, in general, and computer science in particular, have been hyped up to such an extreme level that we've ignored the importance of not only security but broader notions of ethical computing." - James Mickens

We like to think that things are going to get better. That, after all, is why we get up in the morning and go to work: in the hope that we might just be making a difference, that we're working towards something. That's certainly true across the technology landscape. And in cybersecurity in particular, the belief that you're building a more secure world, even if it's on a small scale, is an energizing and motivating thought.

However, at this year's USENIX conference back in August, Harvard Professor James Mickens attempted to put that belief to rest. His talk, titled 'Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?', was an argument for scepticism in a field that is by nature optimistic (not least when it has a solution to sell).

So, what exactly does Mickens have against keynote speakers? Quite a lot, actually: he jokingly calls them people who have made bad life decisions and poor role models. Although his tongue is firmly in his cheek, he does have a number of serious points. Fundamentally, he suggests developers do not invest time in questioning anything, since any degree of introspection would "reduce the frequency of git commits". Mickens' argument is essentially that software developers are deploying new systems without a robust understanding of those systems.

Why machine learning highlights the problem with computer science today

Mickens stresses that such is the hype and optimism around modern technology and computer science that the field has largely forgotten the value of scepticism. In turn, this can be dangerous for issues such as security and ethics. Take machine learning, for instance.
Machine learning is, Mickens says, "the oxygen that Silicon Valley is trying to force into our lungs." It's everywhere, and we seem to need it, but it's also being forced on us almost blindly.

Using the example of machine learning, he illustrates his point about domain knowledge: computer scientists do not have a deep understanding of the mathematics used in machine learning systems, and there is no reason or incentive for them to invest time in learning it. This lack of knowledge means ethical and security issues that may be hidden at a conceptual level, not a technical one, are simply ignored.

Mickens compares machine learning to a standard experiment used in American schools since 8th grade: the egg drop. Students desperately search for a way to prevent an egg from breaking when dropped from 20 feet in the air, and when they finally come up with a technique that works, Mickens explains, they don't really care to understand the logic or maths behind it. Developers, in the context of machine learning, behave in exactly the same way. Machine learning is complex, yes, but often, Mickens argues, developers will have no understanding of why a model generates a particular output for a specific input. When this inscrutable AI is used in models connected to real-life, mission-critical systems (financial markets, healthcare systems, news systems, and so on) and the internet, security issues arise. Indeed, it begins to raise more questions than it provides answers. Now that AI is used practically everywhere, even to detect anomalies in cybersecurity, it is somewhat scary that a technology this unpredictable is being used to protect our systems.

Examples of poor machine learning design

One example James presented that caught our attention was Microsoft's chatbot Tay, which was originally intended to learn language by interacting with humans on Twitter.
That sounds all good and very noble, until you realise that, given the level of toxic discourse on Twitter, your chatbot will quickly turn into a raving Nazi with zero awareness it is doing so. Machine learning systems used for risk assessment in criminal justice have also incorrectly labelled Black defendants as "high risk" at twice the rate of white defendants.

It's time for a more holistic approach to cybersecurity

Mickens further adds that we need a more holistic perspective when it comes to security. To achieve this, developers should ask themselves not only whether a malicious actor can perform illicit actions on a system, but also whether a particular action on a system should be possible, and how the action can achieve societally beneficial outcomes. He says developers make three major assumptions while deploying a new technology:

#1 Technology is value-neutral, and will therefore automatically lead to good outcomes for everyone
#2 New kinds of technology should be deployed as quickly as possible, even if we lack a general idea of how the technology works, or what the societal impact will be
#3 History is generally uninteresting, because the past has nothing to teach us

According to Mickens, developers assume way too much. In his assessment, those of us working in the industry take it for granted that technology will always lead to good outcomes for everyone. This optimism goes hand in hand with a need for speed; in turn, this can lead us to miss important risk assessments, security testing, and a broader view of the impact of technology not just on individual users but on wider society too. Most importantly, for Mickens, we are failing to learn from mistakes. In particular, he focuses on IoT security. Here, Mickens points out, security experts are failing to learn lessons from traditional network security issues. The Harvard Professor has written extensively on this topic; you can go through his paper on IoT security here.
Perhaps Mickens' talk was intentionally provocative, but there are certainly lessons in it. If 2018 has taught us anything, it's that a dose of scepticism is healthy where tech is concerned, and maybe it's time to take a critical eye to the software we build. If the work we do is to actually matter and make a difference, maybe a little negativity is a good thing. What do you think? Was Mickens' assessment of the tech world correct? You can watch James Mickens' whole talk on YouTube.

UN on Web Summit 2018: How we can create a safe and beneficial digital future for all
5 ways artificial intelligence is upgrading software engineering
"ChromeOS is ready for web development" - A talk by Dan Dascalescu at the Chrome Web Summit 2018


Facebook AI researchers investigate how AI agents can develop their own conceptual shared language

Natasha Mathur
20 Nov 2018
4 min read
In a paper published earlier this month, a team of AI researchers at Facebook looked closely at how AI agents 'understand' images and the extent to which they can be said to develop a shared conceptual language. Building on earlier research indicating that "(AI) agents are now developing conceptual symbol meanings," the Facebook research team attempted to dive deeper into how AI agents develop representations of visual inputs. What they found was intriguing: the conceptual 'language' that the AI agents seemed to share wasn't in any way related to the input data, but was instead what the researchers describe as a 'shared conceptual pact'. This research is significant as it opens the lid on how agents in deep learning systems communicate, and opens up new possibilities for understanding how they work.

Background

The researchers take their cue from current research into AI agents, which runs visual 'games'. "This... allows us to address the exciting issue of whether the needs of goal-directed communication will lead agents to associate visually-grounded conceptual representations to discrete symbols, developing natural language-like word meanings," reads the paper. However, most of the existing studies present only an analysis of the agents' symbol usage; very little attention is given to the representation of the visual input developed by the agents during the interaction process.

The researchers made use of the referential games of Angeliki Lazaridou, a research scientist at DeepMind, where a pair of agents communicates about images using a fixed-size vocabulary. "Unlike in those previous studies, which suggested that the agents developed a shared understanding of what the images represented, our researchers found that they extracted no concept-level information," reads the research paper. The paired AI agents would arrive at an image-based decision depending only on low-level feature similarities.

How does it work?
The researchers implemented Lazaridou's same-image game and different-image game. In the same-image game, the Sender and Receiver are shown the same two images (which are always of different concepts). In the different-image game, the Receiver is shown different images than the Sender's every time. The experiments were repeated using 100 random initialization seeds.

The researchers first looked at how playing the game affects the way agents "see" the input data. This involves figuring out how the image embeddings differ from the input image representations, and from each other. The researchers further predicted that, as training continues, Sender and Receiver representations become quite similar to each other, as well as to the input ones. Finally, to compare the similarity structure of the input, Sender and Receiver spaces, the researchers used representational similarity analysis (RSA) from computational neuroscience.

AI agents reach an image-based consensus

The paired agents in the game arrived at an 'image-based consensus' depending solely on low-level feature similarities, without determining, for instance, that pictures of a Boston terrier and a Chihuahua both represent dogs. In fact, the agents were able to reach this consensus even when presented with patterns of visual noise which included no recognizable objects. This confirmed the hypothesis that the Sender and Receiver are capable of communicating about input data with no conceptual content at all. This, in turn, suggests that no concept-level information (e.g., features that would allow identifying instances of the dog or chair category) was extracted by the agents during the training process. For more information, check out the official research paper.
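The core idea of representational similarity analysis is simple: two representation spaces are compared not by their raw embeddings but by how similarly they arrange pairwise similarities between items. A minimal pure-Python sketch of that idea (the toy embeddings below are made up and are not the paper's data):

```python
# Minimal sketch of representational similarity analysis (RSA):
# correlate the pairwise similarity structure of two embedding spaces.
# The toy embeddings below are hypothetical, not the paper's data.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def pairwise_sims(space):
    # Flattened upper triangle of the item-by-item similarity matrix.
    n = len(space)
    return [cosine(space[i], space[j]) for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rsa(space_a, space_b):
    # Two spaces are "representationally similar" when their pairwise
    # similarity structures correlate, even if the raw embeddings differ.
    return pearson(pairwise_sims(space_a), pairwise_sims(space_b))

input_space = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
sender_space = [[2.0, 0.0], [1.8, 0.2], [0.0, 2.0]]  # rescaled: same structure
print(rsa(input_space, sender_space))  # close to 1.0
```

Here the sender space is just a rescaled copy of the input space, so the RSA score is maximal; the paper's striking result is the opposite case, where the agents' spaces agree with each other but not with the input structure.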
UK researchers have developed a new PyTorch framework for preserving privacy in deep learning
Researchers show that randomly initialized gradient descent can achieve zero training loss in deep learning
UK researchers build the world's first quantum compass to overthrow GPS


Bill Gates recommends watching HBO’s Silicon Valley to understand how Silicon Valley actually works

Bhagyashree R
20 Nov 2018
3 min read
Bill Gates has become a fan of the hit HBO series Silicon Valley. In a blog post on Monday, he said that the series aptly and hilariously depicts how Silicon Valley actually functions today.

Whether or not you are in the tech industry, you will have come across mentions of "Silicon Valley" in the news, blog posts, and social media. Silicon Valley is situated in California and is home to many technology companies, ranging from start-ups to tech giants like Apple, Facebook, and Microsoft.

The series is a parody of Silicon Valley. It is the story of five men who have founded a startup. You watch them working hard, raising money for their product, and fighting against a tech giant named Hooli. The series first premiered in 2014 and completed its fifth season this year.

Gates said that the show exaggerates a few things but at the same time captures the reality really well, making it very relatable: "The show is a parody but it captures a lot of truths. Most of the different personality types you see in the show feel very familiar to me. The programmers are smart, super-competitive even with their friends, and a bit clueless when it comes to social cues."

Gates further added that he relates most to Richard Hendricks, the inventor of Pied Piper. Maybe he relates so much to this character because, just like Richard, he was a great programmer from a very young age while his leadership style was considered demanding and slightly abrasive: "Personally, I identify most with Richard, the founder of Pied Piper, who is a great programmer but has to learn some hard lessons about managing people."

The reason the series is so relatable is that the producers and writers have done outstanding research to accurately depict the struggles of a startup, tech politics, and more. The show's writers consult with real startups and founders.
Last year they even met Bill Gates to learn more about the history of the industry and get ideas for the fifth season. The popularity of the show has grown so much that some viewers believe the series has given them real insights into the industry:

https://twitter.com/TobyGrubbs/status/1064589218830270465

Along with the praise, Bill Gates does have some complaints about the show. He believes it gives viewers the impression that most small companies are capable of building great products, while big companies only have money and are incompetent skill-wise. In defense of the big companies he said: "Although I'm obviously biased, my experience is that small companies can be just as inept, and the big ones have the resources to invest in deep research and take a long-term point of view that smaller ones can't afford. But I also understand why the show focuses so much on Pied Piper and makes Hooli look so goofy. It's more fun to root for the underdog."

You can read the entire blog post by Bill Gates on gatesnotes.

Microsoft amplifies focus on conversational AI: Acquires XOXCO; shares guide to developing responsible bots
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
The software behind Silicon Valley's Emmy-nominated 'Not Hotdog' app


RedisGraph v1.0 released, benchmarking proves it's 6-600 times faster than existing graph databases

Melisha Dsouza
20 Nov 2018
4 min read
RedisGraph was released in beta six months ago. On 14th November, Redis Labs announced the general availability of RedisGraph v1.0. RedisGraph is a Redis module that adds graph database functionality to Redis. It delivers a fast and efficient way to store, manage and process graphs, around 6 to 600 times faster than existing graph databases. RedisGraph represents connected data as adjacency matrices and employs the power of GraphBLAS, a highly optimized library for sparse matrix operations.

How does RedisGraph work?

Redis is a single-threaded process by default. RedisGraph is bound to the single thread of Redis and accepts all incoming queries there, while including a threadpool that takes a configurable number of threads at the module's loading time to handle higher throughput. Each query is executed on one of the threads of the threadpool, and each query runs in only one thread. This means reads can scale and handle large throughput easily. It is also what separates RedisGraph from other graph database implementations, which typically execute each query on all available cores of the machine, and it makes RedisGraph more suitable for real-time, real-world use cases where high throughput and low latency under concurrent operations are important.

In RedisGraph, a write query (one that modifies the graph in any way) must be executed in complete isolation. RedisGraph ensures writer/reader separation by using a read/write (R/W) lock: either multiple readers can hold the lock, or just a single writer. The lock ensures that as long as a writer is executing, no reader can acquire the lock, and as long as a reader is executing, no writer can obtain it.

Benchmarking RedisGraph

The team conducted a benchmark test showing that RedisGraph is faster than other graph databases.
They used a simple benchmark released by TigerGraph that covered the following:

- Data loading time
- Storage size of loaded data
- Query response time for k-hop neighborhood count
- Query response time for weakly connected components and PageRank

The TigerGraph benchmark compared the other graph databases and reported TigerGraph to be 2-8,000 times faster than any of them. The Redis team compared RedisGraph using the exact same setup, focusing mainly on the k-hop neighborhood count query. To test the effect of concurrent operations, parallel requests were added to the TigerGraph benchmark. RedisGraph utilized just a single core while the other graph databases used up to 32 cores, yet RedisGraph's response times were faster than any other graph database (with the exception of TigerGraph in the single-request k-hop query tests on the Twitter dataset). The single-request and parallel-request benchmark tests also returned positive results for RedisGraph, and in all the tests conducted, RedisGraph never timed out or generated out-of-memory exceptions.

RedisGraph shows performance improvements under load of 6 to 60 times over existing graph solutions on a large dataset (the Twitter dataset) and 20 to 65 times on a normal dataset (the graph500 dataset). The benchmark also shows that RedisGraph outperforms Neo4j, Neptune, JanusGraph and ArangoDB on single-request response time, with improvements of almost 36 to 15,000 times. Against TigerGraph, single-request response times ranged from 2x faster to 0.8x as fast.

Future improvements listed by the team include:

- Performance improvements for aggregations and large result sets
- A faster version of GraphBLAS
- More Cypher clauses/functionality to support even more diverse queries
- Integration with graph visualization software
- LDBC benchmarking tests

You can head over to the Redis Labs official blog to know more about the benchmarking tests conducted.
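The k-hop neighborhood count at the heart of the benchmark maps naturally onto the adjacency-matrix representation RedisGraph uses: the nodes reachable in exactly k hops fall out of repeated boolean matrix multiplication, the style of computation GraphBLAS optimizes for sparse matrices. A minimal pure-Python sketch (the 4-node graph below is a made-up toy, not the benchmark dataset):

```python
# Toy k-hop neighborhood count over a boolean adjacency matrix, the
# representation RedisGraph/GraphBLAS builds on. The graph is hypothetical.

def bool_matmul(a, b):
    # Boolean matrix product: result[i][j] is True iff some i -> x -> j path exists.
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def k_hop_neighbors(adj, src, k):
    # Nodes reachable from src in exactly k hops.
    reach = adj
    for _ in range(k - 1):
        reach = bool_matmul(reach, adj)
    return {j for j, hit in enumerate(reach[src]) if hit}

# Directed 4-cycle: 0 -> 1 -> 2 -> 3 -> 0
adj = [
    [False, True,  False, False],
    [False, False, True,  False],
    [False, False, False, True ],
    [True,  False, False, False],
]

print(k_hop_neighbors(adj, 0, 2))  # {2}
print(len(k_hop_neighbors(adj, 0, 3)))  # the 3-hop neighborhood count: 1
```

A sparse-matrix library like GraphBLAS performs the same multiplication without materializing the dense False entries, which is what makes the matrix formulation fast on real graphs.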
Introducing Web High Level Shading Language (WHLSL): A graphics shading language for WebGPU
Facebook's GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation
2018 is the year of graph databases. Here's why.


Verizon hosted Ericsson 2018 OSS/BSS User Group with a ‘Quest For Easy’ theme

Amrata Joshi
20 Nov 2018
3 min read
On 14th and 15th November, Verizon hosted Ericsson's 2018 OSS/BSS (Operations Support Systems/Business Support Systems) User Group conference. The theme of the conference was 'Quest for Easy'. The two-day event, held this year in New York, USA, included presentations, demos, panel discussions and meetings with service providers from all over the world.

The participants got some wonderful insights from customers who shared their OSS/BSS experiences. The attendees also got a chance to go through around ten demos across Ericsson's OSS/BSS portfolio. They explored what 'Quest for Easy' can mean for consumers and enterprises, operations and businesses, and learned how 'Quest for Easy' can create an impact on service providers and the products and services they offer.

Highlights from the speakers

Experts from Ericsson made the event more interesting. Emanuele Iannetti, Head of Solution Area BSS, brought the audience up to date on the Ericsson BSS portfolio. Marton Sabli, Head of Solution and Service Readiness for Solution Area OSS, Rick Mallon, Head of BSS Catalog and Order Management, and Mats Karlsson, Head of Solution Area OSS, also gave talks. Marton Sabli spoke about the orchestration of network slices and service assurance in a multi-vendor environment. Mats Karlsson gave his insight on 5G, from enablers to monetization, and Rick Mallon explained how Catalog and Order Care help in achieving simplicity and automation. Mohit Gupta, Director of Product Management at Ericsson, explained to the audience the importance of insight-driven automation.

Insights from the panel discussions

The panel discussions were interesting, as the customers got a chance to speak about the challenges, opportunities and their overall point of view. The discussions were divided into two tracks.
The first track was based on revenue management, while the other covered service management, orchestration, and analytics. Arthur D. Little led one of the panels, in which customers including Verizon, Sprint and T-Mobile shared their views on the challenges and opportunities related to 5G and IoT. They also shared their experiences with digital transformation and monetization. Gary G. Fujinami, Director of Performance Analytics Development/Operations at Verizon, threw some light on the challenges of management and orchestration in an NFV (Network Functions Virtualization) enabled multi-operator world.

Read more about this news on Ericsson.

OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name 'Open Infrastructure Summit'
Tableau 2019.1 beta announced at Tableau Conference 2018
"We call on the UN to invest in data-driven predictive methods for promoting peace", Nature researchers on the eve of ViEWS conference

Introducing ReX.js v1.0.0, a companion library for RegEx written in TypeScript

Prasad Ramesh
20 Nov 2018
2 min read
ReX.js is a helper library, written in TypeScript, for writing regular expressions. Yesterday, ReX.js v1.0.0, the first major version, was released. Being written in TypeScript, it provides great autocompletion and a good development experience across modern code editors. One of the main advantages of using ReX.js is its ability to document every line of code without hassle.

Anatomy of ReX.js v1.0.0

ReX.js is structured as a namespace consisting of the following modules:

- Matcher: the class used to construct and use matching expressions.
- Replacer: the class used to construct and use replacement expressions.
- Operation: represents a basic operation applied to expression constructors.
- Parser: the class used to parse and execute regexps. It is used by Matcher and implements polyfills for named groups and, partially, for lookbehinds.
- ReXer: used to construct regexps. The Matcher and Replacer classes inherit from ReXer.

The GitHub page says that the Matcher and Replacer classes are the ones developers will most likely use; the other classes are more likely to be used for extensibility and advanced use cases.

Advanced use of ReX.js v1.0.0

Beyond basic regex operations, ReX.js also provides options for extending its functionality.

Operations and channels

Every method used in ReX.js simply adds a new Operation to the ReXer. An Operation can then be stringified using its own stringify method. A concept of channels is introduced to construct linear regexps from nested function expressions. A channel is simply an array of Operations, and the channels themselves are stored as an array in the ReXer.

Snippets

Snippets are available if you want to reuse any kind of Operation configuration. Snippets provide an option to assign a given config to a name for later reuse.

Methods and extensions

Methods are ways to reuse and apply custom operations, while extensions are just arrays of methods.
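The library's selling point, documenting every line of a pattern, has a long-standing analogue in plain regular expressions: verbose mode with per-line comments. A sketch of the same idea in Python (this illustrates the concept only and is not the ReX.js API; the semver pattern is our own example):

```python
# Line-by-line regex documentation, the style ReX.js aims for,
# sketched with a plain verbose-mode regex (Python's re.VERBOSE).
# This is an analogy, not ReX.js itself.
import re

semver = re.compile(r"""
    ^
    (?P<major>\d+)   # major version
    \.
    (?P<minor>\d+)   # minor version
    \.
    (?P<patch>\d+)   # patch version
    $
""", re.VERBOSE)

m = semver.match("1.0.0")
print(m.group("major"), m.group("minor"), m.group("patch"))  # 1 0 0
```

ReX.js brings this kind of self-documenting construction to JavaScript, where native regex literals offer no comment syntax at all.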
Installing ReX.js v1.0.0

ReX.js is available as a package on npm. You can include it in your current project with:

npm install @areknawo/rex

If you're using Yarn, use the following command instead:

yarn add @areknawo/rex

For more details and documentation, visit the ReX.js GitHub page.

Manipulating text data using Python Regular Expressions (regex)
Introducing Howler.js, a JavaScript audio library with full cross-browser support
low.js, a Node.js port for embedded systems


A multi-factor authentication outage strikes Microsoft Office 365 and Azure users

Savia Lobo
20 Nov 2018
2 min read
Yesterday, Microsoft Azure and Office 365 users had trouble logging into their accounts. The cause was a multi-factor authentication (MFA) issue which prevented users from signing in to their services. The outage started at 04:39 UTC yesterday, with Azure Active Directory users struggling to gain access to their accounts when MFA was enabled. The issue continued for almost seven hours.

A notice confirming the outage was put up on Office 365's service health page stating, "Affected users may be unable to sign in". The impact of the outage was specific to users located in the Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC) regions.

According to Azure's status page, "Engineers have explored mitigating a back-end service via deploying a code hotfix, and this is currently being validated in a staging environment to verify before potential roll-out to production. Engineers are also continuing to explore additional workstreams to expedite mitigation." Azure engineers said that they are also developing an alternative code update to resolve the connectivity issue between MFA and the cache provider.

Pete Banham, cyber resilience expert at Mimecast, said to CBR in an email statement, "With less than a month between disruptions, incidents like today's Azure multi-factor authentication issue pose serious productivity risks for those sticking to a software-as-a-service monoculture." He further added, "No organization should trust a single cloud supplier without an independent cyber resilience and continuity plan to keep connected and productive during unplanned, and planned, email outages. Every minute of an email outage could cost businesses hundreds and thousands of pounds."

According to the Office 365 status page, "We've observed continued success with recent MFA requests and are taking additional actions in the environment in an effort to prevent new instances of this problem.
Our investigation into the root cause of this problem is ongoing and maintained as our highest priority."

To know more about this news in detail, head over to TechCrunch.

Monday's Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia
Worldwide Outage: YouTube, Facebook, and Google Cloud go down affecting thousands of users
Basecamp 3 faces a read-only outage of nearly 5 hours


Introducing EuclidesDB, a multi-model machine learning feature database

Bhagyashree R
20 Nov 2018
2 min read
Yesterday, EuclidesDB, a multi-model machine learning feature database, released v0.1. EuclidesDB 0.1 is tightly coupled with PyTorch and provides a backend for including and querying data on the model feature space. EuclidesDB provides a simple standalone server that stores data, builds indexes, and serves requests using efficient serialization and protocols with an easy API. It provides APIs for adding new data into the database and querying it later. Since it uses gRPC (gRPC Remote Procedure Call) for communication, the API can be consumed in many different languages. As mentioned earlier, it comes with a very tight PyTorch integration, using libtorch as the backend to run traced models, and thus provides an easy pipeline for integrating new models into the EuclidesDB C++ backend.

The concept behind EuclidesDB

EuclidesDB is based on two main concepts: "Module" and "Model" are terms used interchangeably to represent a computation, and a "model space" represents the space of features generated by a model. When a user adds a new image or other kind of data into the database, they specify which model should be used to index it. The data is then forwarded into the specified models, and their features are saved into a local key-value database. Similarly, when a user queries for similar items in a model space, they make a request with a new image and specify in which model spaces they want to find similar items. Similar items from each model space are then returned together with their relevance.

Features of EuclidesDB v0.1

EuclidesDB v0.1:
- Uses gRPC as the communication protocol and protobuf as the serialization mechanism for its client APIs.
- Uses LevelDB for database serialization.
- Uses LSH (Locality Sensitive Hashing) for approximate nearest neighbors.
- Comes with PyTorch integration through libtorch.
- Provides easy integration for new custom fine-tuned models.
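EuclidesDB's approximate nearest neighbor search relies on LSH. The idea behind one common variant, random-hyperplane LSH, is that vectors with a small angle between them receive the same hash bits with high probability. A minimal sketch of that idea (illustrative only, not EuclidesDB's implementation):

```python
import random

def lsh_signature(vec, hyperplanes):
    # One bit per hyperplane: which side of it the vector falls on.
    bits = []
    for plane in hyperplanes:
        dot = sum(v * p for v, p in zip(vec, plane))
        bits.append("1" if dot >= 0 else "0")
    return "".join(bits)

def hamming(s, t):
    return sum(x != y for x, y in zip(s, t))

random.seed(42)  # fixed seed so the sketch is reproducible
dim, n_planes = 8, 16
hyperplanes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

a = [1.0, 0.5, -0.2, 0.9, 0.1, -0.4, 0.3, 0.7]
b = [1.1, 0.4, -0.1, 0.8, 0.2, -0.5, 0.3, 0.6]  # a small perturbation of a
c = [-v for v in a]                              # the exact opposite of a

sig_a, sig_b, sig_c = (lsh_signature(v, hyperplanes) for v in (a, b, c))
# Near-duplicates share most hash bits; opposite vectors share none.
print(hamming(sig_a, sig_b) < hamming(sig_a, sig_c))  # prints True
```

A feature store then only compares a query against items whose signatures land in the same (or nearby) hash buckets, instead of scanning every stored vector.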
EuclidesDB is currently in its initial release, and many new features will be introduced in future versions. The client API is also expected to change in upcoming releases before a robust API design is stabilized. To know more in detail, check out EuclidesDB’s official website.
Tim Cook criticizes Google for their user privacy scandals but admits to taking billions from Google Search

Amrata Joshi
20 Nov 2018
3 min read
In September, Goldman Sachs estimated that almost $9 billion in revenue flows to Apple from Google for being the built-in search engine in Apple’s Safari web browser. Until then, Apple had never talked about this revenue stream. Last week, however, Apple CEO Tim Cook sat for an interview with Axios on HBO. In the interview, he was asked whether he agreed with taking billions of dollars from Google. He replied casually, “I think their (Google’s) search engine is the best”. He also admitted that the Apple-Google partnership is not "perfect." He further defended Apple’s multi-billion-dollar deal with Google search by pointing to the additional security measures Apple has added to Safari to "help" users better navigate the Google search engine, including private web browsing and intelligent tracking prevention. He stated in the interview, "Look at what we've done with the controls we've built in. We have private web browsing. We have an intelligent tracker prevention. What we've tried to do is come up with ways to help our users through their course of the day. It's not a perfect thing. I'd be the very first person to say that. But it goes a long way to helping." Apple has been quite vocal about not selling targeted advertisements based on user information. Cook has criticized Google, Facebook, and other social media platforms for mishandling user privacy, claiming that Apple’s business model depends on selling hardware such as smartphones and tablets and that the company is very particular about user privacy. Last month, Cook also gave a speech at a privacy conference in Brussels, where he voiced his concerns about privacy on various social media platforms and called for new digital privacy laws in the United States. His concerns involved companies’ collection of users’ personal data, data manipulation, and surveillance. People on the internet are largely not in favor of this news.
Twitter users are raising eyebrows at Cook’s casual statement and at the fact that Apple takes billions of dollars from Google even while disagreeing with its policies. https://twitter.com/b_fung/status/1064552025864765441   https://twitter.com/christianring/status/1064614295395282947 Apple previously used Bing as Safari’s default search engine, but the company switched to Google in 2017 because it faced consistency issues with Bing. It is still unclear whether the main reason for the switch was the expectation of more consistent results or the multi-billion-dollar deal. You can see a snippet of Tim Cook’s interview on Axios.
Apex.AI announced Apex.OS and Apex.Autonomy for building failure-free autonomous vehicles

Sugandha Lahoti
20 Nov 2018
2 min read
Last week, Alphabet’s Waymo announced that it will launch the world’s first commercial self-driving car service next month. Just two days later, Apex.AI announced its autonomous mobility systems. The announcement came soon after the company closed a $15.5M Series A funding round, led by Canaan with participation from Lightspeed. Apex.AI has designed a modular software stack for building autonomous systems, which integrates easily into existing systems as well as third-party software. An interesting claim the company makes about its system: “The software is not designed for peak performance — it’s designed to never fail. We’ve built redundancies into the system design to ensure that single failures don’t lead to system-wide failures.” Its two products are Apex.OS and Apex.Autonomy.

Apex.OS

Apex.OS is a meta-operating system, an automotive version of ROS (Robot Operating System). It allows software developers to write safe and secure applications based on ROS 2 APIs. Apex.OS is built with safety in mind: it is being certified according to the automotive functional safety standard ISO 26262 as a Safety Element out of Context (SEooC) up to ASIL D. It ensures system security through HSM support, process-level security, encryption, and authentication. Apex.OS improves production code quality through the elimination of all unsafe code constructs, and it ships with support for automotive hardware, i.e. ECUs and automotive sensors. Moreover, it comes with complete documentation including examples, tutorials, and design articles, plus 24/7 customer support.

Apex.Autonomy

Apex.Autonomy provides developers with building blocks for autonomy. It has well-defined interfaces for easy integration with any existing autonomy stack. It is written in C++, is easy to use, and can be run and tested on Linux, Linux RT, QNX, Windows, and OSX. It is designed with production and ISO 26262 certification in mind, and is CPU-bound on x86_64 and amd64 architectures. A variety of LiDAR sensors are already integrated and tested. Read more about the products on the Apex.AI website.
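The “designed to never fail” claim rests on redundancy. A classic way to keep a single failure from propagating (an illustrative sketch, not Apex.AI's actual mechanism) is triple modular redundancy: run three independent computations and accept the majority value:

```python
from collections import Counter

def majority_vote(readings):
    """Return the value agreed on by a strict majority of redundant inputs."""
    value, count = Counter(readings).most_common(1)[0]
    if count * 2 <= len(readings):
        # No strict majority: the redundant channels disagree too much.
        raise RuntimeError("redundant inputs disagree; no safe value")
    return value

# One faulty channel out of three is simply outvoted.
print(majority_vote([42, 42, 7]))  # prints 42
```

A real automotive stack would vote on time-stamped, range-checked signals and fall back to a degraded mode rather than raising, but the fault-containment idea is the same.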
FoundationDB 6.0.15 releases with multi-region support and seamless failover management

Natasha Mathur
20 Nov 2018
3 min read
The FoundationDB team released version 6.0.15 of its distributed NoSQL database yesterday. FoundationDB 6.0.15 introduces new features such as multi-region support and seamless failover management, along with performance changes and bug fixes. FoundationDB is an open source, multi-model datastore by Apple that lets you store multiple data types in a single database. All data in FoundationDB is safely stored, distributed, and replicated in the Key-Value Store component. FoundationDB offers high performance on commodity hardware, helping you support very heavy loads at a low cost. Let’s have a look at what’s new in FoundationDB 6.0.15.

New features

- FoundationDB 6.0.15 offers native multi-region support that dramatically increases your database's global availability, along with greater control over how failover scenarios are managed.
- Seamless failover is now possible in FoundationDB 6.0.15, allowing your cluster to survive the loss of an entire region without any service interruption. These features can also be deployed so that clients experience low-latency, single-region writes.
- Support has been added for asynchronous replication to a remote DC with processes in a single cluster. This improves the asynchronous replication provided by fdbdr, as servers can fetch data from the remote DC in case all the other replicas in one DC have been lost.
- Additional support has been added for synchronous replication of the transaction log to a remote DC. This ensures that the remote DC need not contain any storage servers.
- The TLS plugin has been statically linked into the client and server binaries; a separate library is no longer needed.
- The fileconfigure command has been added to fdbcli, which configures a database from a JSON document.

Performance changes

- The master recovery time for clusters with large amounts of data has been significantly reduced.
- Recovery time has been significantly reduced for cases where rollbacks are executed on the memory storage engine.
- Clients can now update their key location cache much more efficiently after storage server reboots.
- Multiple resolver configurations have been tuned to balance work more efficiently between resolvers.

Bug fixes

- Clusters configured to use TLS would get stuck, with all their CPU spent opening new connections. This issue has now been fixed.
- The issue of TLS certificate reloading causing TLS connections to drop until the processes were restarted has been fixed.
- The issue of watches registered on a lagging storage server taking a long time to trigger has been fixed.

Other changes

- The capitalization of trace event names and attributes has been normalized.
- Memory requirements of the transaction log have been increased by 400MB.
- The replication factor in status JSON is now stored under redundancy_mode instead of redundancy.factor.
- The metric data_version_lag has been replaced by data_lag.versions and data_lag.seconds.
- Several additional metrics have been added for the number of watches and the mutation count, and are exposed through status.

For more information on FoundationDB 6.0.15, check out the official release notes.
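As a rough illustration of the new fileconfigure command: the JSON document mirrors the configuration section of status json. The sketch below is an assumption about the shape of a two-datacenter region setup, so check the FoundationDB configuration documentation for the exact schema:

```json
{
  "regions": [
    { "datacenters": [ { "id": "dc1", "priority": 1 } ] },
    { "datacenters": [ { "id": "dc2", "priority": 0 } ] }
  ]
}
```

Saved as, say, regions.json, it would then be loaded from within fdbcli with something like `fileconfigure regions.json`.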
Microsoft’s move towards ads on the Mail App in Windows 10 sparks privacy concerns

Amrata Joshi
19 Nov 2018
4 min read
Microsoft had planned to bring ads to the Mail app in Windows 10, and it has an entire support page dedicated to ads in Mail. But last week, after backlash, Frank X. Shaw, the Head of Communications at Microsoft, claimed on Twitter that ads in the Mail app were never intended to be tested broadly, and that they have now been turned off. https://twitter.com/fxshaw/status/1063518403036557312 According to Microsoft, the ads would appear for all users: even someone who doesn’t use a Microsoft email service like Outlook and only has Gmail, Yahoo, G Suite, or other third-party accounts would still see the ads until they purchased an Office 365 subscription. The team at Microsoft is running a pilot in Brazil, Canada, Australia, and India to get user feedback on ads in Mail. The ads are visible on Windows Home and Windows Pro, but not on Windows EDU or Windows Enterprise.

Microsoft chooses interest-based advertising for its users

Windows generates an advertising ID for each user on a device. When the advertising ID is enabled, both Microsoft apps and third-party apps can access and use it, much like websites access and use a unique identifier stored in a cookie. The Mail app uses this ID to serve more relevant advertising to users, and it may also use demographic information to make ads more relevant; this is possible for users who have logged into Windows with a Microsoft account. Users can turn off interest-based advertising at any time; they will still see ads, but the ads won’t be matched to their interests. As per Microsoft’s support page, these interest-based ads do not scan the user’s emails to decide what to display: Microsoft says it does not use personal information, like the content of email, calendar, or contacts, to target users with ads, and does not use the content in the mailbox or in the Mail app.
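The advertising ID described above can also be inspected or disabled outside the Settings app. As an illustrative, Windows-only sketch, the per-user toggle lives in the registry; this is the same value that Settings > Privacy > General flips:

```powershell
# Disable the per-user advertising ID (0 = off, 1 = on).
# With it off, apps receive an empty ID instead of a stable identifier.
Set-ItemProperty -Path "HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\AdvertisingInfo" `
                 -Name "Enabled" -Value 0 -Type DWord
```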
But privacy remains a concern where Microsoft is involved. As per a report by Privacy Company, Microsoft collects and stores users’ personal data without any public documentation. Microsoft systematically collects data about the individual use of Word, Excel, Outlook, and PowerPoint without letting users know. Since the data stream is encoded, Microsoft offers no way to switch off the data collection, nor any ability to see what data has been collected. For example, Microsoft collects information about events in Word, such as when you press the backspace key a number of times in a row (which probably means you do not know the correct spelling), but also the sentence before and after a word that you look up in the online spelling checker or translation service. Microsoft’s use of telemetry data is one of the report’s biggest concerns, as Microsoft is pushing more and more services off-premise. The Data Protection Impact Assessment (DPIA) shows that new offerings like the Microsoft cloud, SharePoint, OneDrive, and Office 365 come with high data protection risks for data subjects. The blog states that Microsoft has already committed to adjusting its software to accommodate privacy concerns, e.g. a telemetry data viewer tool and a new “zero-exhaust” setting.

Privacy Company outlines six high risks for data subjects:

1. The unlawful storage of classified/sensitive/special categories of data, both in metadata and in subject lines of email.
2. The incorrect qualification of Microsoft as a data processor, instead of a joint controller.
3. Insufficient control over factual data processing and sub-processors.
4. The lack of purpose limitation, both for the processing of historically collected data and the possibility to dynamically add new types of events.
5. The transfer of diagnostic data outside of the European Economic Area (EEA), while the current legal ground for Office ProPlus is the Privacy Shield, whose validity is the subject of a procedure at the European Court of Justice.
6. The indefinite retention period of diagnostic data, and the lack of a tool to delete historical diagnostic data.

Privacy Company recommends a few measures for enterprise admins to lower the privacy risk for employees and other users: do not use SharePoint Online / OneDrive, do not use the web-only version of Office 365, and use a stand-alone deployment without a Microsoft account for confidential/sensitive data. Read more about the news in the DPIA’s PDF.
Blackberry is acquiring AI & cybersecurity startup, Cylance, to expand its next-gen endpoint solutions like its autonomous cars’ software

Savia Lobo
19 Nov 2018
2 min read
On Friday, BlackBerry announced plans to acquire Cylance for $1.4 billion in cash to help expand its QNX unit, which makes software for next-generation autonomous cars. According to BlackBerry, “Cylance will operate as a separate business unit within BlackBerry Limited”. The deal is expected to close by February 2019. Describing the Cylance acquisition, BlackBerry CEO John Chen said, “Cylance’s leadership in artificial intelligence and cybersecurity will immediately complement our entire portfolio, UEM, and QNX in particular. We are very excited to onboard their team and leverage our newly combined expertise. We believe adding Cylance’s capabilities to our trusted advantages in privacy, secure mobility, and embedded systems will make BlackBerry Spark indispensable to realizing the Enterprise of Things.” Technology from Cylance will be leveraged in critical areas of BlackBerry’s Spark Platform, a next-generation secure chip-to-edge communications platform for the Enterprise of Things (EoT) that creates and leverages trusted connections between endpoints and enables organizations to comply with stringent multi-national regulatory requirements. Cylance’s CEO Stuart McClure said, “Our highly skilled cybersecurity workforce and market leadership in next-generation endpoint solutions will be a perfect fit within BlackBerry where our customers, teams, and technologies will gain immediate benefits from BlackBerry’s global reach. We are eager to leverage BlackBerry’s mobility and security strengths to adapt our advanced AI technology to deliver a single platform.” To know more about this acquisition, head over to the official press release.
Hackers claim to have compromised ProtonMail, but ProtonMail calls it ‘a hoax and failed extortion attempt’.

Amrata Joshi
19 Nov 2018
3 min read
Last week, hackers attempted to extort ProtonMail by alleging a data breach, with no evidence. One of the alleged hackers, named AmFearLiathMor, wrote in the message: “We hacked Protonmail and have a significant amount of their data from the past few months. We are offering it back to Protonmail for a small fee, if they decline then we will publish or sell user data to the world.” ProtonMail is one of the largest secure email services, developed by CERN and MIT scientists. The ProtonMail team clarified, “We have no indications of any breach from our internal infrastructure monitoring.” On further investigation, the team traced the source of the rumors to 4chan, an image-based bulletin board where anyone can post comments and share images anonymously. The claims there included:

- CNN employees use ProtonMail and refer to the American people as prostitutes.
- Michael Avenatti uses ProtonMail and has a BDSM fetish.
- Private military contractors used ProtonMail to discuss circumventing the Geneva convention, underwater drone activities in the Pacific Ocean, and possible international treaty violations in Antarctica.
- Rampant pedophilia among high-ranking government officials who use ProtonMail.

ProtonMail’s team said, “We believe that this is a hoax and failed extortion attempt, and there is zero evidence to suggest otherwise.” For example, the criminals claimed that ProtonMail is vulnerable because the company doesn’t use SRI (Subresource Integrity), but this claim is baseless: ProtonMail doesn’t use any third-party CDNs (content delivery networks) to serve its web app, only its own web servers, which eliminates that particular attack vector. The team said, “We are aware of a small number of ProtonMail accounts which have been compromised as a result of those individual users falling for phishing attacks (this is why we encourage using 2FA). However, we currently have zero evidence of a breach of our infrastructure.” As per a report by BleepingComputer, the hackers offered to send $20 in bitcoin to anyone who would spread the word about this hack using #Protonmail on Twitter. People have had mixed reactions to this news; many are simply scared, do not wish to take any risks, and suggest changing their passwords. https://twitter.com/ProtonMail/status/1063392853014048768   https://twitter.com/crytorekt1/status/1063452592792051713 The team said, “The best way to ensure that they (criminals) do not succeed is to ignore them.” As many users consider the platform secure, this alleged hacking news, though probably false, has still managed to have some impact on users. The recent announcement of the read receipts feature could be a small distraction, but is it enough to shift attention from the hacking news? https://twitter.com/ProtonMail/status/1063485043660734464 Read more about this news on Reddit.
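For context on the SRI claim: Subresource Integrity lets a page pin a script fetched from a third-party CDN to a cryptographic hash, so a compromised CDN cannot silently serve altered code; since ProtonMail serves its app only from its own servers, there is no third-party fetch to pin. A small sketch of how an integrity value is computed (the URL and script content here are made up):

```python
import base64
import hashlib

def sri_hash(content: bytes, alg: str = "sha384") -> str:
    # An SRI integrity value is "<alg>-" plus the base64 digest of the
    # exact bytes the browser will receive.
    digest = hashlib.new(alg, content).digest()
    return f"{alg}-{base64.b64encode(digest).decode()}"

script = b"console.log('hello');"
tag = (f'<script src="https://cdn.example.com/app.js" '
       f'integrity="{sri_hash(script)}" crossorigin="anonymous"></script>')
print(tag)
```

If the CDN served even one byte differently from what was hashed, the browser would refuse to execute the script.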