
Tech News

Apple now allows U.S. users to download their personal data via its online privacy data portal

Savia Lobo
18 Oct 2018
3 min read
Yesterday, Apple started allowing U.S. users to download a copy of all the data the company stores about them, as part of an expansion of its privacy data portal. The company had announced this expansion earlier this year. Per Bloomberg, prior to making the functionality available to U.S. users, Apple rolled it out in Europe earlier this year as part of its compliance with the European Union's General Data Protection Regulation (GDPR).

With this change, U.S. users can download data such as all of their address book contacts, calendar appointments, music streaming preferences, and details about past Apple product repairs. Previously, customers could not get their data without contacting Apple directly. Apple launched its online privacy portal in May, at which point U.S. users were only allowed to correct their data or delete their Apple accounts.

Apple has also added messages across its apps that tell users how their data is being handled. The company is also rolling out an updated privacy page on its website today, detailing what data it does and does not store. Apple says that it does not store much of users' data, which was confirmed by Zack Whittaker, a security editor at TechCrunch, when he asked Apple for his own data and the company turned over only a few megabytes of spreadsheets, including his order and purchase histories and marketing information.

In his article on ZDNet, Zack says, "The zip file contained mostly Excel spreadsheets, packed with information that Apple stores about me. None of the files contained content information -- like text messages and photos -- but they do contain metadata, like when and who I messaged or called on FaceTime." He further added, "Any other data that Apple stores is either encrypted — so it can't turn over — or was only held for a short amount of time and was deleted."

As for Apple's privacy policy updates, the company refreshes its privacy pages once a year, about a month after its product launches. It first launched its dedicated privacy pages in 2014, and a year later, in 2015, it overhauled the traditional privacy policy by going more full-disclosure. Zack says that, since then, Apple's pages have expanded and continued to be transparent about how the company encrypts user data on its devices.

To know more about how Apple encrypts user data in detail, visit Zack's post on ZDNet.

Apple bans Facebook's VPN app from the App Store for violating its data collection rules
Apple has introduced Shortcuts for iOS 12 to automate your everyday tasks
Apple buys Shazam, and will soon make the app ad-free

Express Gateway v1.13.0 releases; drops support for Node 6

Sugandha Lahoti
18 Oct 2018
2 min read
Express Gateway v1.13.0 was released yesterday. Express Gateway is a simple, agnostic, organic, and portable microservices API gateway built on Express.js. Release 1.13.0 drops support for Node 6.

What's new in this version?

Changes

The development Dockerfile has been updated to better leverage caching: the COPY statements are now at the very bottom so that all the layers above them can be cached. Developers no longer need to manually create the work directory, since WORKDIR does that automatically.

The automated deployment process has been updated to push an updated README to the official Helm chart.

The policy file is now exposed as a set of functions instead of as a class, since it does not really hold any state and is not extended anywhere. The change turns the current policy from a singleton class into an object that exports three functions, which might help people get started hacking on Express Gateway.

All dependencies have been updated ahead of the minor release.

Fixes

A lot of changes have been made in Winston after the 3.0.0 migration. These include:

A better default log level, info, which avoids using console.log in production code.
All references in the code now use verbose to hide statements that do not matter.
Color has been added to the log context to differentiate between timestamp, context, level, and message.
Functions that aren't used anywhere but were harming the general test coverage have been deprecated.

Also, it is now possible to provide a raw regular expression to Express Gateway's CORS policy, which allows the cors origin configuration to have regular expressions as values.

Read more about the release on GitHub.

Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js
API Gateway and its need
Deploying Node.js apps on Google App Engine is now easy

DeepMind open sources TRFL, a new library of reinforcement learning building blocks

Natasha Mathur
18 Oct 2018
3 min read
The DeepMind team announced yesterday that it is open sourcing a new library, named TRFL, that comprises useful building blocks for writing reinforcement learning (RL) agents in TensorFlow. The TRFL library was created by the research engineering team at DeepMind and is a collection of key algorithmic components used in a large number of DeepMind's agents, such as DQN, DDPG, and the Importance Weighted Actor-Learner Architecture.

A typical deep reinforcement learning agent comprises a large number of interacting components, including the environment and some deep network representing values or policies. RL agents often also include components such as a learned model of the environment, pseudo-reward functions, or a replay system. These components interact in subtle ways, which makes it difficult to identify bugs in large computational graphs.

One way to address this is to open source complete agent implementations: large agent codebases are useful for reproducing research, but they are hard to modify and extend. A different and complementary approach is to provide reliable, well-tested implementations of common building blocks that can be reused across a variety of different RL agents. TRFL helps here, as it includes functions for implementing both classical RL algorithms and more cutting-edge techniques. The loss functions and other operations that come with TRFL are implemented in pure TensorFlow. They are not complete algorithms; rather, they are implementations of RL-specific mathematical operations that are needed when building fully functional RL agents.

The DeepMind team provides TensorFlow ops for value-based reinforcement learning in discrete action spaces, such as TD-learning, Sarsa, Q-learning, and their variants. It also offers ops for implementing continuous control algorithms such as DPG, as well as ops for learning distributional value functions. Finally, TRFL comes with an implementation of the auxiliary pseudo-reward functions used by UNREAL, which improve data efficiency in a wide range of domains. (A minimal usage sketch appears at the end of this article.)

"This is not a one-time release. Since this library is used extensively within DeepMind, we will continue to maintain it as well as add new functionalities over time. We are also eager to receive contributions to the library by the wider RL community", mentioned the DeepMind team.

For more information, check out the official DeepMind blog.

Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system
Microsoft open sources Infer.NET, its popular model-based machine learning framework
Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library
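To give a flavor of how these building blocks are meant to be used, here is a minimal sketch of wiring TRFL's Q-learning loss into a TF1-style graph. It follows the batched calling convention described in the library's README; the tensor values are purely illustrative, and the exact fields of the returned extras (such as td_error) should be checked against the TRFL documentation.

    import tensorflow as tf
    import trfl

    # Q-values for the previous and next timesteps, shape [batch_size, num_actions].
    q_tm1 = tf.constant([[1.0, 1.0, 0.0], [1.0, 2.0, 0.0]])
    q_t = tf.constant([[0.0, 1.0, 0.0], [1.0, 2.0, 0.0]])

    # Action taken, reward received, and soft discount, shape [batch_size].
    a_tm1 = tf.constant([0, 1])
    r_t = tf.constant([1.0, 1.0])
    pcont_t = tf.constant([0.0, 1.0])

    # The op returns a per-element TD loss plus extra outputs (TD error, target);
    # the reduced loss can be minimized like any other TensorFlow loss.
    loss, extra = trfl.qlearning(q_tm1, a_tm1, r_t, pcont_t, q_t)
    reduced_loss = tf.reduce_mean(loss)

    with tf.Session() as sess:
        print(sess.run([reduced_loss, extra.td_error]))

Because the loss is just a TensorFlow op, it can be dropped into whatever optimizer and training loop an agent already uses.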

Twilio Flex, a fully-programmable contact center platform, is now generally available

Bhagyashree R
18 Oct 2018
3 min read
Yesterday, Twilio announced the general availability of Flex. Since its preview announcement in March, Flex has been used by thousands of contact center agents, including support and sales teams at Lyft, Scorpion, Shopify, and U-Haul. Twilio Flex is a fully-programmable, cloud-based contact center platform that aims to give businesses complete control over customer engagement and puts a great deal of flexibility in their hands.

What functionalities does Flex provide to enterprises?

Answer user queries using Autopilot
Flex provides a conversational AI platform called Autopilot, which businesses can use to build custom messaging bots, IVRs, and home assistant apps. These bots are trained with data pulled by Autopilot using Twilio's natural language processing engine. Companies can deploy the bots across multiple channels, including voice, SMS, chat, Alexa, Slack, and Google Assistant. With these bots, enterprises can respond to frequently asked questions and, if a query becomes complex, hand the conversation over to a human agent.

Secure phone payments with Twilio Pay
With only one line of code, you can activate the Twilio Pay service, which gives businesses the tools needed to process payments over the phone. It relies on secure payment methods such as tokenization to ensure that credit card information is handled safely. (A minimal code sketch appears at the end of this article.)

Provide a true omnichannel experience
Flex gives enterprises access to a number of channels out of the box, including voice, SMS, email, chat, video, and Facebook Messenger, among others. Agents can also switch from channel to channel without losing the conversation or its context.

Customize the user interface programmatically
Flex user interfaces are designed with customization in mind. Enterprises can customize customer-facing components like click-to-call or click-to-chat, add entirely new channels, or integrate new reporting dashboards to display agent performance or customer satisfaction.

Integrate any application
Enterprises can integrate their business-critical third-party applications with Flex, including systems such as customer relationship management (CRM), workforce management (WFM), reporting, analytics, or data stores.

Analytics and insights for a better customer experience
Flex offers a real-time event stream, a supervisor desktop, and an admin desktop, which give supervisors and administrators complete visibility and control over interaction data. Using these analytics and insights, they can better monitor and manage an agent's performance.

To know more about Twilio Flex, check out the official announcement.

Twilio acquires SendGrid, a leading Email API Platform, to bring email services to its customers
Twilio WhatsApp API: A great tool to reach new businesses
Building a two-way interactive chatbot with Twilio: A step-by-step guide
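To illustrate the "one line of code" claim about Twilio Pay, here is a hedged sketch using the twilio Python helper library to generate the TwiML for an inbound call. The pay() helper and its charge_amount keyword are assumptions mapped from the <Pay> TwiML verb's chargeAmount attribute, so verify the exact names against Twilio's current documentation before relying on them.

    from twilio.twiml.voice_response import VoiceResponse

    # Build the TwiML returned to Twilio for an inbound call: a single <Pay> verb
    # prompts the caller for card details and charges the given amount securely.
    response = VoiceResponse()
    response.pay(charge_amount='20.45')  # assumed keyword; maps to the chargeAmount attribute

    print(response)  # emits the generated <Response><Pay .../></Response> XML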

EFF kicks off its Coder’s Rights project with a paper on protecting security researchers’ rights

Sugandha Lahoti
18 Oct 2018
3 min read
The Electronic Frontier Foundation is introducing a new Coder's Rights project to allow programmers and developers to research and develop freely, without worrying about serious legal challenges that might inhibit their work. Through the project, EFF will protect researchers via education, legal defense, amicus briefs, and involvement in the community. It will also provide policy advice to decision-making officials who are considering new computer crime legislation and treaties. The project seeks to support the right of free expression that lies at the heart of researchers' creation and use of computer code to examine computer systems, and to relay their discoveries among their peers and to the wider public.

To kick off the project, EFF published a whitepaper yesterday, Protecting Security Researchers' Rights in the Americas. The paper aims to provide the "legal and policy basis for the Coder's Rights project, outlining human rights standards that lawmakers, judges, and most particularly the Inter-American Commission on Human Rights, should use to protect the fundamental rights of security researchers." According to the paper, security researchers currently "work in an environment of legal uncertainty, even as their job becomes more vital to the orderly functioning of society."

The paper is grounded in the rights recognized by the American Convention on Human Rights and in examples from North and South American jurisprudence. It analyzes "what rights security researchers have; how those rights are expressed in the Americas' unique arrangement of human rights instruments, and how the EFF might best interpret the requirements of human rights law when applied to the domain of computer security research and its practitioners."

Here are the main highlights from the paper:

Courts and the law should guarantee that the creation, possession, or distribution of tools related to cybersecurity is protected by Article 13 of the American Convention on Human Rights, as legitimate acts of free expression.
Lawmakers and judges should discourage the use of criminal law as a response to socially beneficial behavior by security researchers.
Cybercrime law should include malicious intent and actual damage in its definition of criminal liability.
Criminal liability must be based on laws that describe in a precise manner which conduct is forbidden and which is punishable.
Penalties for computer crimes should be proportionate to the harm caused by crimes conducted without the use of a computer.
Proactive actions should be taken to secure the free flow of information in the security research community.

The white paper is available for download. Read more about the Coder's Rights project on EFF.

Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill "that sets a floor, not a ceiling"
Consumer protection organizations submit a new data protection framework to the Senate Commerce Committee
What the EU Copyright Directive means for developers – and what you can do

LLVM will be relicensing under Apache 2.0 at the start of next year

Prasad Ramesh
18 Oct 2018
3 min read
After efforts that began last year, LLVM, the set of compiler-building tools, is moving closer to an Apache 2.0 license. Currently, the project uses its own open source license created by the LLVM team; the move to Apache 2.0 is going forward based on the mailing list discussions.

Why the shift to Apache 2.0?

The current license is a bit vague, was not very welcoming to contributors, and had some patent issues. Hence, the project decided to shift to the industry-standard Apache 2.0. The new license was drafted by Heather Meeker, the same lawyer who worked on the Commons Clause. The goals of the relicensing, as listed on the LLVM website, are:

Encourage ongoing contributions to LLVM by preserving a low barrier to entry for contributors.
Protect users of LLVM code by providing explicit patent protection in the license.
Protect contributors to the LLVM project by explicitly scoping their patent contributions with this license.
Eliminate the schism between runtime libraries and the rest of the compiler that makes it difficult to move code between them.
Ensure that LLVM runtime libraries may be used by other open source and proprietary compilers.

The plan to shift LLVM to Apache 2.0

The new license is not plain Apache 2.0; the license header reads "Apache License v2.0 with LLVM Exceptions". The exceptions are related to compiling source code; to know more about them, follow the mailing list. The team plans to install the new license along with a developer policy that references both the new and old licenses. From that point, all subsequent contributions will be made under both licenses.

The team has a two-fold plan to ensure that contributors are aware of the change. It will ask many active contributors (both enterprises and individuals) to explicitly sign an agreement to relicense their contributions; signing makes the change clear and known while also covering historical contributions. For any other contributors, commit access will be revoked until the LLVM organization can confirm that they are covered by one of the agreements.

The agreements

For the plan to work, both individuals and companies need to sign an agreement to relicense, and there is a process for each.

Individuals

Individuals have to fill out a form with the necessary information, such as email addresses and potential employers, to effectively relicense their contributions. The form contains a link to a DocuSign agreement to relicense any of their individual contributions under the new license. Signing the document avoids confusion over which contributions are covered and whether they are covered by a company. The form and agreement are available on Google Forms.

Companies

There is a DocuSign agreement for companies too. Some companies, such as Argonne National Laboratory and Google, have already signed it. There will be no explicit copyright notice, as the team doesn't feel it is worthwhile.

The current planned timeline is to install the new developer policy and the new license after the LLVM 8.0 release in January 2019. For more details, you can read the mail.

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
LLVM 7.0.0 released with improved optimization and new tools for monitoring
OpenMP, libc++, and libc++abi are now part of llvm-toolchain package

Gartner lists ‘Digital Ethics and Privacy’ as one of the top 10 strategic technology trends for 2019

Bhagyashree R
17 Oct 2018
5 min read
Yesterday, Gartner published an analysis listing the top 10 strategic technology trends for 2019. The analysis aims to help enterprise architecture and technology innovation leaders identify opportunities, counter threats, and create competitive advantage. Alongside the more obvious rising technology trends, one more trend has made it onto the Gartner list: digital ethics and privacy. The analysis presents the full top 10 in a chart (source: Gartner).

2018 has been the year of data breaches, hacking, and various privacy concerns, and we saw many data controversies involving tech giants like Google and Facebook. For instance, last month Facebook witnessed its biggest security breach, in which 50 million accounts were compromised. This happened because of a vulnerability in Facebook's code that existed between July 2017 and September 2018. Later, Facebook clarified that data of 30 million accounts was stolen: "We now know that fewer people were impacted than we originally thought. Of the 50 million people whose access tokens we believed were affected, about 30 million actually had their tokens stolen."

Previously, Facebook was also in the news for sharing users' personally identifying information (PII) with its advertisers, a conclusion of a study done by researchers at Northeastern University and Princeton University. Reportedly, Google had also entered into a deal with Mastercard to track, over the past year, whether users' offline buying habits are influenced by online ads. Bloomberg reported that Mastercard provided customers' transaction data to Google, and it is likely that other credit card companies are doing the same.

These controversies have made consumers more aware that their personal information is valuable and that they should demand control over it. Users are becoming increasingly concerned about how their personal information is being used by both the public and private sector, and organizations that do not proactively address these concerns will surely face a backlash. Organizations are also recognizing the need for increased security and better management of users' personal data. As a result, they are beginning to take actions such as revising their data privacy policies, while governments are exploring how best to implement strict legislation to ensure they do, without stifling tech innovation.

For example, the Senate Commerce Committee recently held a hearing on protecting consumer data privacy with privacy advocates from the US and an EU representative. The hearing focused on the perspective of privacy advocates and other experts, who encouraged federal lawmakers to create strict data protection rules giving consumers more control over their personal data. The major focus was on implementing a strong common federal consumer privacy bill "that sets a floor, not a ceiling." Major tech giants and social media companies, including AT&T, Amazon, Google, Twitter, Apple, and Charter Communications, also met the US Senate Committee to discuss consumer data privacy.

Ethics and privacy: The hierarchy of intent

The importance of privacy to an organization should be driven by its ethics and trust. Enterprises should not just be asking "are we compliant?"; rather, they should ask "are we doing the right thing?" Gartner illustrates this with a "hierarchy of intent" diagram (source: Gartner): "The move from compliance-driven organizations to ethics-driven organizations can be described as the hierarchy of intent."

Mind compliance

Minding compliance is the lowest level in the hierarchy and is only externally driven. Enterprises at this level focus on avoiding issues and make decisions about the use of technology based on what is allowed. Irrespective of whether something is right or wrong, if there is no rule against what is proposed, it is allowed.

Mitigating risk

This level includes enterprises that are prepared to risk doing harm to others as long as they do not harm themselves. They also assess the risk of getting caught doing something that leads to public embarrassment and reputational damage. The report says: "Companies that misuse personal data will lose the trust of their customers. Trustworthiness is a key factor in driving revenue and profitability. Building customer trust in an organization is difficult, but losing it is easy. However, organizations that gain and maintain the trust of their customers will thrive. By 2020, we expect that companies that are digitally trustworthy will generate 20% more online profit than those that aren't."

Making a difference

To make a difference for customers, industries, or society at large, companies should use ethical considerations. By acting ethically, commercial enterprises can create competitive differentiation, and public sector institutions can create value for citizens.

Following your values

This means making morally driven decisions based on what your brand values are, what your brand represents, and what your brand permits. The analysis highlights that technology is not for maximizing profit at the expense of customers: "Following your values comes down to being able to look yourself in the mirror and feel convinced you are doing the right thing. Are you treating customers, employees or citizens as you would expect to be treated? The successful use of technology is not to maximize its utility for the organization at the expense of the customer, rather it is to figure out how to get the most value out of it for both the organization and the individuals it depends on."

To read Gartner's full report, head over to its official website.

Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy, yesterday
Did you know Facebook shares the data you share with them for 'security' reasons with advertisers?
Google's Senate testimony, "Combating disinformation campaigns requires efforts from across the industry."

Spammy bots most likely influenced FCC’s decision on net neutrality repeal, says a new Stanford study

Melisha Dsouza
17 Oct 2018
4 min read
In December 2017, the Federal Communications Commission voted to kill net neutrality protections, ignoring the overwhelming public support for safeguarding the open internet. Now, a study by a Stanford University researcher reports that of the roughly 22 million comments filed with the agency about the move to revoke the regulations, nearly 100 percent were fake or duplicated.

Assisted by data scientist Jeff Kao, Ryan Singel, a media and strategy fellow at Stanford, sifted through all submitted comments to produce the findings. Using a machine learning program, Kao separated out the millions of comments that were fake or duplicated and almost certainly taken from form and letter campaigns. He took the 60+ GB dataset of comments, mapped each comment into a semantic vector space, and clustered the comments based on their meaning, which resulted in approximately 150 clusters of comment submissions. In the end, he was left with about 800,000 unique comments. What's surprising is that, of those unique comments, only 0.3 percent supported the repeal of net neutrality. (A minimal illustration of this vectorize-and-cluster approach appears at the end of this article.)

The question then arises: on what basis did the FCC decide to repeal net neutrality? Moreover, did it have a system in place to filter out bot-sent comments? The answer to the latter question is a big 'NO'. The report suggests that the FCC did nothing to prevent comment stuffing and comment fraud. Many of the comments were submitted under false identities, using email addresses belonging to journalists, lawmakers, and dead people. Kao subsequently contacted commenters by email, asking them whether they had submitted the comment associated with their address. While the responses varied, users submitting pro-net neutrality comments confirmed that they had submitted them. Moreover, after the public had cast its vote, no information was released to the public, journalists, or policymakers to help them understand what Americans had actually told the FCC about the repeal of the 2015 Open Internet Order.

Ryan's findings were released on October 15 and first reported by Motherboard. The report, entitled "Filtering Out the Bots: What Americans Actually Told the FCC about Net Neutrality Repeal", points out that Americans were well informed on the topic of net neutrality. Ryan and Kao also matched and sorted comments by geographic area, finding that 646,041 unique comments could be matched to Congressional districts. The resulting reports for every district explore citizens' concerns over net neutrality.

The report also suggests measures for the FCC and other government agencies to avoid comment stuffing while making it easy for Americans to participate in nationwide discussions. It suggests that a confirmation email be sent once a comment is posted; the owner of the email address can then confirm or deny having sent the comment. For users without an email address, comments can be marked as "no email address given". Comments could then be labeled as "confirmed", "unconfirmed", "denied", "invalid email address", or "no email address given", which would help researchers and policymakers identify likely fake comments. Lists of known fake email addresses could also be shared and registered across federal agencies to combat comment stuffing. To identify bot-controlled email addresses, the system could mark every comment with a count of the number of submissions from that particular address, which would help discard repetitive comments from the same email ID.

You can download the full Filtering Out the Bots report to explore links to the individual reports for every Congressional district and state.

The U.S. Justice Department sues to block the new California Net Neutrality law
Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act
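As promised above, here is a minimal illustration of the vectorize-and-cluster idea behind the study, using TF-IDF vectors and k-means from scikit-learn on a toy set of comments. The study's actual pipeline and models differ, so treat this purely as a conceptual sketch.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    comments = [
        "I support net neutrality and the 2015 Open Internet Order.",
        "I strongly support net neutrality and the Open Internet Order of 2015.",
        "Repeal Title II, it discourages broadband investment.",
        "Please repeal the Title II rules that discourage broadband investment.",
    ]

    # Map each comment into a vector space; near-duplicate form letters end up
    # with very similar vectors.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)

    # Group similar vectors; each cluster roughly corresponds to one form-letter
    # campaign, and cluster sizes show how much of the corpus is duplicated.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    print(labels)  # e.g. [0 0 1 1]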

YouTube went down, Twitter flooded with deep questions, YouTube back and everyone is back to watching cat videos

Natasha Mathur
17 Oct 2018
2 min read
YouTube shocked the internet world yesterday evening (Pacific time) when it suffered a worldwide outage. The disruption lasted well over half an hour (although it felt like a lifetime) for users worldwide. Users were unable to log in, upload, or watch any videos; the site showed a message saying "500 Internal Server Error: a team of highly trained monkeys has been dispatched to deal with this situation".

Although the cause of the outage isn't clear yet, the YouTube team was well aware of the issue and took to Twitter to update users:
https://twitter.com/TeamYouTube/status/1052373937839980544

The last time YouTube experienced a service outage was back in July, during the World Cup game between Croatia and England, and before that in May, when YouTube TV faced a similar disruption during the NBA Eastern Conference Finals. Once yesterday's issue was resolved, YouTube notified users on Twitter:
https://twitter.com/TeamYouTube/status/1052393799815589889

While YouTube was down, users all around the world found themselves stuck with the question "what should I do with my life now?". Twitter was the perfect platform for passing the time, with everyone stepping up their meme game and venting their frustration over the outage. Here are some of the best tweets from users expressing their feelings about their temporary exile from YouTube:
https://twitter.com/Io_Kobato/status/1052415691507281922
https://twitter.com/mxhmdhanna/status/1052381281260978178
https://twitter.com/UnboxTherapy/status/1052381713559633920
https://twitter.com/_Spotteh_/status/1052388888940109825
https://twitter.com/Jack_Septic_Eye/status/1052372583637766144
https://twitter.com/PhillyPolice/status/1052371210384891904
https://twitter.com/babyboihc/status/1052426241318576129

But the meme game didn't stop there. Memes continued to roll out even after YouTube was back up and running. We're guessing it was people's way of recuperating from the exile.
https://twitter.com/itz_lunaaaa/status/1052390151035310080
https://twitter.com/heykittygorl/status/1052390034958155778
https://twitter.com/dreeealugo/status/1052389757253246978

YouTube's CBO speaks out against Article 13 of EU's controversial copyright law
YouTube has a $25 million plan to counter fake news and misinformation
Is YouTube's AI Algorithm evil?

Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company's 2018 annual report on LinkedIn yesterday. He talks about Microsoft's accomplishments in the past year and the results and progress of its work across the modern workplace, business applications, infrastructure, data, AI, and gaming. He also mentions the data and privacy rules adopted by Microsoft and its aim to "instill trust in technology across everything they do."

Microsoft's results and progress

Data and AI
Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. Its Azure Bot Service has nearly 300,000 developers, and it is on the road to building the world's first AI supercomputer in Azure. Microsoft also acquired GitHub in recognition of the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications
Microsoft's investments in Power BI have made it the leader in business analytics in the cloud. Its Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality will be used for designing for first-line workers, who account for 80 percent of the world's workforce. New solutions powered by LinkedIn and the Microsoft Graph help companies manage talent, training, and sales and marketing.

Applications and Infrastructure
Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world's computer. It added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. Microsoft expanded its global data center footprint to 54 regions and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace
More than 135 million people use Office 365 commercial every month. Outlook Mobile is used on 100 million iOS and Android devices worldwide. Microsoft Teams is being used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming
The company surpassed $10 billion in gaming revenue this year. Xbox Live now has 57 million monthly active users, and Microsoft is investing in new services like Mixer and Game Pass. It also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft's impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft's technology to build their own digital capabilities. Walmart is using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of Microsoft's partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state's 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient's heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.

How Microsoft is handling trust and responsibility

Microsoft's motto is "instilling trust in technology across everything they do." Nadella says, "We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices." Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. It also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and is calling on governments to do more to make the internet safe. Microsoft announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

The company is also investing in tools for detecting and addressing bias in AI systems and is advocating for government regulation. It is addressing society's most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, "Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work." Microsoft has nearly doubled the number of women corporate vice presidents since FY16 and has increased African American/Black and Hispanic/Latino representation by 33 percent. He concludes by saying, "I'm proud of our progress, and I'm proud of the more than 100,000 Microsoft employees around the world who are focused on our customers' success in this new era."

Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members

Developers rejoice! GitHub announces GitHub Actions, GitHub Connect and much more to improve development workflows

Melisha Dsouza
17 Oct 2018
5 min read
Yesterday, at the GitHub Universe annual developer conference in San Francisco, the team announced a host of new changes to help developers manage and improve their development workflows. GitHub has been used by 31 million developers in the past year and is one of the most trusted code hosting platforms. It has received enormous support from developers all over the globe, and the team has decided to repay that support by making life easier for them. The new upgrades include:

GitHub Actions, which helps developers automate workflows and builds while sharing and executing code inside containers on GitHub.
GitHub Connect, for facilitating a unified business identity, unified search, and unified contributions.
Powerful security tools with the GitHub Security Advisory API.
Improvements to the GitHub Learning Lab.

Let's look at these updates in depth.

#1 GitHub Actions

"A lot of the major clouds have built products for sysadmins and not really for developers, and we want to hand power and flexibility back to the developer and give them the opportunity to pick the tools they want, configure them seamlessly, and then stand on the shoulders of the giants in the community around them on the GitHub platform"
- GitHub head of platform Sam Lambert (in an interview with VentureBeat)

Software development demands that a project be broken down into hundreds, if not thousands, of small steps (depending on the scope of the project) to get the job done faster and more efficiently. This means that at every stage of development, teams need to coordinate to understand the progress of each step. Teams need to work concurrently and ensure that their actions don't overlap or overwrite changes made by other members. Many companies perform these checks manually, using different development tools, which takes a lot of time and effort.

Enter GitHub Actions. This new feature uses code packaged in a Docker container running on GitHub's servers. Users can set up triggers for events, for instance introducing new code to a project, packaging an NPM module, or sending an SMS alert. The trigger sets off Actions that take further steps defined by criteria set by administrators. Besides automating tasks, GitHub Actions allows users to connect and share containers to run their software development workflow. They can easily build, package, release, update, and deploy their project in any language, without having to run the code themselves. Developer, Team, and Business Cloud plans can use Actions, which are available in a limited public beta on GitHub.

#2 GitHub Connect

"GitHub Connect begins to break down organizational barriers, unify the experience across deployment types, and bring the power of the world's largest open-source community to developers at work."
- Jason Warner, GitHub's senior vice president of technology

The team announced that GitHub Connect is now generally available. GitHub Connect comes with new features: unified search, unified business identity, and unified contributions. Unified search can search through both the open source code on the site and internal code; when searching from a GitHub Enterprise instance, users can view search results from public content on GitHub.com. The unified business identity feature allows administrators to easily manage user accounts that exist across separate Business Cloud installations; using a single back-end interface, businesses can improve billing, licensing, permissions, and policy operations. Many developers come across the issue of their contributions being locked behind the firewalls of private companies; unified contributions lets developers get credit for the work they've done in the past on repositories for businesses.

#3 Better Security

The new GitHub Security Advisory API automates vulnerability scans and makes it easier for developers to find threats in their code. GitHub vulnerability alerts now support .NET and Java, so developers who use these languages will get a heads-up if any dependent code has a security exploit. GitHub will also start scanning all public repositories for known token formats, so developers who accidentally commit their security tokens to public code can rest a little easier: on finding a known token, the team will alert the token provider to validate the commit and contact the account owner to issue a new token. From automating detection and remediation to tracking emergent security vulnerabilities, it looks like the team is going all out to improve its security functionality. (A small sketch of querying the Security Advisory API follows at the end of this article.)

#4 The GitHub Learning Lab

GitHub Learning Lab helps developers get started with GitHub, manage merge conflicts, contribute to their first open source project, and more. The team announced three new Learning Lab courses, covering secure development workflows with GitHub, reviewing a pull request, and getting started with GitHub Apps. These courses will be made available to everyone. Developers can create private courses and learning paths, customize course content, and access administrative reports and metrics with the Learning Lab.

The announcements have caused a buzz among developers on Twitter:
https://twitter.com/fatih/status/1052238735755173888
https://twitter.com/sarah_edo/status/1052247186220568577
https://twitter.com/jmsaucier/status/1052322249372590081

It will be interesting to see how these updates shape the use of GitHub in the future. To know more about the announcements, head over to GitHub's official blog.

GitHub is bringing back Game Off, its sixth annual game building competition, in November
RawGit, the project that made sharing and testing code on GitHub easy, is shutting down!
GitHub comes to your code editor; GitHub security alerts now have machine intelligence
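As referenced in the security section above, here is a hedged Python sketch of pulling recent advisories from the Security Advisory API via GitHub's GraphQL endpoint. The securityAdvisories field and the selected fields follow GitHub's published GraphQL schema, but treat the exact query shape as an assumption and consult the API documentation; the token below is a placeholder.

    import requests

    GITHUB_TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # placeholder

    query = """
    {
      securityAdvisories(first: 5) {
        nodes {
          ghsaId
          summary
          severity
          publishedAt
        }
      }
    }
    """

    # POST the GraphQL query to GitHub's v4 API endpoint with a bearer token.
    resp = requests.post(
        "https://api.github.com/graphql",
        json={"query": query},
        headers={"Authorization": "bearer " + GITHUB_TOKEN},
    )
    resp.raise_for_status()

    for advisory in resp.json()["data"]["securityAdvisories"]["nodes"]:
        print(advisory["ghsaId"], advisory["severity"], "-", advisory["summary"])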

Google Cloud announces new Go 1.11 runtime for App Engine

Bhagyashree R
17 Oct 2018
2 min read
Yesterday, Google Cloud announced a new Go 1.11 runtime for the App Engine standard environment. It provides all the benefits of App Engine, such as paying only for what you use, automatic scaling, and managed infrastructure. Starting with Go 1.11, which was launched in August this year, Go on App Engine has no limits on application structure, supported packages, context.Context values, or HTTP clients.

What has changed in the Go 1.11 runtime compared to Go 1.9?

1. You can now specify the Go 1.11 runtime in your app.yaml file by adding the following line: runtime: go111
2. Each of your services must include a package main statement in at least one source file.
3. The appengine build tag is deprecated and will no longer be used when building an app for deployment.
4. The way you import dependencies has changed. You can specify dependencies in this runtime in two ways: by putting your application and related code in your GOPATH, or by creating a go.mod file to define your module.
5. Google App Engine no longer modifies the Go toolchain to include the appengine package. Using the Google Cloud client library or third-party libraries instead of the App Engine-specific APIs is recommended.
6. You can deploy services that use the Go 1.11 runtime with the gcloud app deploy command. You can still use the appcfg.py commands with the Go 1.9 runtime, but the gcloud command-line tool is preferred.

This release of the Go 1.11 runtime in App Engine uses the latest stable release of Go 1.11 and will automatically update to new minor versions upon deployment, but not to new major versions. It is currently in beta and might change in backward-incompatible ways in the future.

You can read more about the Go 1.11 runtime on The Go Blog and in the documentation published by Google.

Golang plans to add a core implementation of an internal language server protocol
Why Golang is the fastest growing language on GitHub
Golang 1.11 is here with modules and experimental WebAssembly port among other updates

MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code

Natasha Mathur
17 Oct 2018
3 min read
MongoDB, a leading free and open source general-purpose database platform, announced yesterday that it has issued a new software license, the Server Side Public License (SSPL), for the MongoDB Community Server. The new license will be applied to all new releases and versions of the MongoDB Community Server, including patch fixes for prior versions.

"The market is increasingly consuming software as a service, creating an incredible opportunity to foster a new wave of great open source server-side software. Unfortunately, once an open source project becomes interesting, it is too easy for cloud vendors who have not developed the software to capture all of the value while contributing little back to the community," mentioned Eliot Horowitz, CTO and co-founder, MongoDB.

Earlier, MongoDB was licensed under the GNU AGPLv3 (AGPL). That license allowed companies to modify and run MongoDB as a publicly available service, but only if they open sourced their software or acquired a commercial license from MongoDB. However, as the popularity of MongoDB grew, some cloud providers started taking MongoDB's open source code to offer a hosted commercial version of the database to their users without abiding by the open source rules. This is why MongoDB decided to switch to the SSPL.

"We have greatly contributed to, and benefited from, open source, and are in a unique position to lead on an issue impacting many organizations. We hope this new license will help inspire more projects and protect open source innovation," said Horowitz.

The SSPL is not very different from the AGPL, except that it clearly specifies the conditions for providing open source software as a service. In fact, the new license offers the same level of freedom to the open source community as the AGPL. Companies still have the freedom to use, review, modify, and redistribute the software, but to offer MongoDB as a service they need to open source the software they use to do so. This does not apply to customers who have purchased a commercial license from MongoDB.

"We are big believers in open source. It leads to more valuable, robust and secure software. However, it is important that open source licenses evolve to keep pace with the changes in our industry. With the added protection of the SSPL, we can continue to invest in R&D and further drive innovation and value for the community," mentioned Dev Ittycheria, President & CEO, MongoDB.

For more information, check out the official MongoDB announcement.

MongoDB acquires mLab to transform the global cloud database market and scale MongoDB Atlas
MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]
MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more

Google to charge a licensing fee from Android-based smartphone vendors in the EU to use its Chrome, Search and other apps

Sugandha Lahoti
17 Oct 2018
3 min read
In compliance with the EU regulations set up when Google was fined $5 billion in July for breaching EU antitrust laws, Google has made major changes to its licensing policy. Last week, Google filed its appeal against the Commission's decision at the General Court of the European Union. While the appeal is pending, it has now informed the European Commission of the changes made to its policy.

Read More: A quick look at E.U.'s antitrust case against Google's Android

Google is splitting the license for its standard apps from Chrome and the official search app. This means that if a company wants to use some of Google's apps (Gmail and YouTube, for instance) but also use competing apps (Bing for search and Firefox for browsing), it can do so in the European Economic Area (EEA), albeit with an associated cost. Hiroshi Lockheimer, Senior Vice President, Platforms & Ecosystems, mentioned in the Google blog, "Since the pre-installation of Google Search and Chrome together with our other apps helped us fund the development and free distribution of Android, we will introduce a new paid licensing agreement for smartphones and tablets shipped into the EEA." He also points out that "Android will remain free and open source."

Google will also offer separate licenses for the Google Search app and for Chrome, and will provide commercial agreements to partners for the non-exclusive pre-installation and placement of Google Search and Chrome, although Google noted that pre-installed competition was already possible. Android vendors in the EEA are also allowed to make forked versions of Android while still distributing Google apps.

All the licensing changes will take effect for devices launched after October 29. It is also likely that companies in Europe that have to pay for Google apps may pass the licensing fee on to consumers in the form of higher device prices.

Lockheimer added that Google will continue its commitment to the Android ecosystem, saying, "We'll be working closely with our Android partners in the coming weeks and months to transition to the new agreements. And of course, we remain deeply committed to continued innovation for the Android ecosystem."

Read the official announcement on the Google blog.

OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices
Google reveals an undisclosed bug that left 500K Google+ accounts vulnerable in early 2018; plans to sunset Google+ consumer version

Jeff Bezos: Amazon will continue to support U.S. Defense Department

Richard Gall
16 Oct 2018
2 min read
Just days after Google announced that it was pulling out of the race to win the $10 billion JEDI contract from the Pentagon, Amazon's Jeff Bezos has stated that Amazon will continue to support Pentagon and Defense Department projects. But Bezos went further, criticizing tech companies that don't work with the military. Speaking at the Wired25 conference, the Amazon chief said, "if big tech companies are going to turn their back on U.S. Department of Defense (DoD), this country is going to be in trouble... One of the jobs of senior leadership is to make the right decision, even when it's unpopular."

Bezos remains unfazed by criticism

It would seem that Bezos isn't fazed by the criticism other companies have faced. Google explained its withdrawal by saying "we couldn't be assured that it would align with our AI Principles." However, it's likely that the significant internal debate about the ethical uses of AI, as well as a wave of protests against Project Maven earlier in the year, were critical components in the final decision.

Microsoft remains in the running for the JEDI contract, but there appears to be much more internal conflict over the issue. Anonymous Microsoft employees have, for example, published an open letter to senior management on Medium. The letter states: "What are Microsoft's AI Principles, especially regarding the violent application of powerful A.I. technology? How will workers, who build and maintain these services in the first place, know whether our work is being used to aid profiling, surveillance, or killing?"

Clearly, Jeff Bezos isn't too worried about upsetting his employees. Perhaps the story says something about the difference in the corporate structure of these huge companies: while they all have high-profile management teams, it's only at Amazon that the single figure of Bezos reigns supreme in the spotlight. With Blue Origin, he's got his sights set on something far beyond ethical decision making: sending humans into space. Cynics might even say it's the logical extension of the implicit imperialism of his enthusiasm for the Pentagon.