
Tech News

Facebook plans to invest $300 million to support local journalism

Sugandha Lahoti
16 Jan 2019
4 min read
On Tuesday, Facebook announced that it will invest $300 million over a period of three years to support local journalism, including news programs, partnerships, and content. Facebook has been facing criticism for spreading fake news and misinformation on its platform and for its poor decisions on data and privacy controls; investing a large amount in a news-focused initiative may be a way to redeem its image.

"People want more local news, and local newsrooms are looking for more support," Campbell Brown, Facebook's vice president in charge of global news partnerships, said in a statement. "That's why today we're announcing an expanded effort around local news in the years ahead. We're going to continue fighting fake news, misinformation, and low-quality news on Facebook," she added.

Facebook says the project is an expansion of its previously launched accelerator program for metro newspapers, which helps them build their digital subscription business. The $300 million in funding will support local journalists in news-gathering and in building sustainable long-term business models. Per a report by Axios, one-third of the money has already been committed to local news non-profits and programs, and to Facebook's own local news initiatives. A grant of nearly $5 million will go to the Pulitzer Center to launch a fund supporting 12 local newsrooms with in-depth, multimedia reporting projects, along with an additional $5 million matching gift. Another $2 million will go to Report for America to help place 1,000 journalists in local newsrooms across America over the next five years. Other recipients include the Knight-Lenfest Local News Transformation Fund, the Local Media Association and Local Media Consortium, the American Journalism Project, and the Community News Project. Last year, Google started its own journalism initiative, the Google News Initiative (GNI), investing over $300 million to support the news industry's biggest needs.

Reaction to the initiative has been mixed. Jim Friedlich, Executive Director and CEO of The Lenfest Institute for Journalism, says Facebook and local news are "co-dependent" and calls the investments "a sincere effort to help the local news business". Fran Wills, CEO of the Local Media Consortium, supported the initiative, saying, "Facebook is making this investment to help support local media companies, open up new revenue streams that will support local journalism."

There was, however, no shortage of criticism. Nikki Usher, a George Washington University professor of media studies, said the effort "is a bit of smoke and mirrors because it's hard to tell what's really local for Facebook". Facebook's effort is "a lot of money in one sense but in another sense it's not that much, the equivalent of revenues of one large newspaper", said Dan Kennedy, a journalism professor at Northeastern University. A Hacker News user said, "This play to 'local news' is simply a tool to advance their own agenda, they want to own local news and help you feel that FB is all warm and 'local' to your needs." Another said, "As someone who has been running a local newspaper for the last four years in a town, my trust in Facebook is exactly zero. They have practically monopolized news distribution, helped to destroy the business model of important social service and now they would like to make up for it by giving a fraction of the money back?"

https://twitter.com/AnandWrites/status/1085198548717760512

Only time will tell whether news companies welcome these contributions, or whether it remains difficult for the social media platform to repair its relationships with publishers and other tech companies.

Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?
Facebook is reportedly rating users on how trustworthy they are at flagging fake news
Four 2018 Facebook patents to battle fake news and improve news feed

Brave introduces Brave Ads that share 70% revenue with users for viewing ads

Bhagyashree R
16 Jan 2019
2 min read
Yesterday, the team behind Brave, a privacy-focused browser, announced that they are previewing their upcoming digital advertising model, Brave Ads, in Brave's Developer channel. Brave Ads will ship in the upcoming Brave 1.0 release, through which users will receive 70% of the gross ad revenue. The advertising model is opt-in and does not replace ads on websites; users can decide how many ads they would like to see. Currently, the Brave Beta version does not include advertiser confirmation or user payment for ad views. In the coming weeks, the team will roll out updates that allow users to earn BAT (Basic Attention Tokens) for viewing ads.

How Brave Ads work

Users who choose to see Brave Ads are notified about offers as they browse the web. Once they agree to engage with these notifications, they are presented with a full-page ad in a private ad tab. The Brave team says this design ensures that user privacy is not compromised and that personal data never leaves the user's device: "Unlike conventional digital ads, ad matching happens directly on the user's device, so a user's data is never sent to anyone, including Brave. Accessing user attention no longer entails large scale user data collection."

Users will receive their rewards in the form of BAT via the Brave Rewards feature integrated into the browser. They can donate their earned BAT to their favorite sites on a monthly basis or use it to tip content creators. The model will later be extended to let users spend BAT on premium content and services, or withdraw it from their wallets. Once confirmations become available, users will be paid at the end of each calendar month.

Chromium-based Brave browser shows 22% faster page load time than its Muon-based counterpart
Chrome 72 Beta releases with public class fields, user activation, and more
Google's V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon
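The revenue split and on-device matching described above are easy to picture with a toy sketch. The Python snippet below is purely conceptual and is not Brave's implementation: the interest set, ad catalog, and BAT amounts are invented for illustration, and only the 70% user share comes from the announcement.

```python
# Conceptual sketch only: NOT Brave's code. Interests and catalog are made up.
USER_REVENUE_SHARE = 0.70          # per the Brave Ads announcement

# Interest signals derived locally from browsing; in Brave's model they never leave the device.
local_interests = {"privacy", "browsers", "cryptocurrency"}

ad_catalog = [
    {"id": "ad-1", "topics": {"cryptocurrency"}, "gross_revenue_bat": 0.05},
    {"id": "ad-2", "topics": {"travel"},         "gross_revenue_bat": 0.08},
]

# Matching happens on-device: only the chosen ad would ever be shown, not the user's data.
matched = [ad for ad in ad_catalog if ad["topics"] & local_interests]
earned_bat = sum(ad["gross_revenue_bat"] * USER_REVENUE_SHARE for ad in matched)
print(matched[0]["id"], round(earned_bat, 4))   # ad-1 0.035
```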

Ethereum community postpones Constantinople, post vulnerability detection from ChainSecurity

Savia Lobo
16 Jan 2019
2 min read
The Ethereum developers announced yesterday that they are pulling back the Constantinople hard fork upgrade after a vulnerability that could allow hackers to steal users' funds was reported. The upgrade had been scheduled to launch today, January 16th. The issue, a "reentrancy attack" enabled by Ethereum Improvement Proposal (EIP) 1283, was identified by the smart contract audit firm ChainSecurity, which described the bug in detail in a Medium blog post yesterday.

According to the Ethereum official blog, "Security researchers like ChainSecurity and TrailOfBits ran (and are still running) analysis across the entire blockchain. They did not find any cases of this vulnerability in the wild. However, there is still a non-zero risk that some contracts could be affected."

According to a statement by the Ethereum core developers and the Ethereum security community, "Because the risk is non-zero and the amount of time required to determine the risk with confidence is longer than the amount of time available before the planned Constantinople upgrade, a decision was reached to postpone the fork out of an abundance of caution."

The ChainSecurity blog explained the cause of the potential vulnerability and also suggested how smart contracts can be tested for it. It highlighted that EIP-1283 introduces cheaper gas costs for SSTORE operations. Had the upgrade gone ahead, smart contracts on the chain could have used code patterns that would make them vulnerable to a reentrancy attack, even though those same contracts were not vulnerable before the upgrade.

Afri Schoedon, the hard fork coordinator at Ethereum, said, "We will decide further steps on Friday in the all-core-devs call. For now it will not happen this week. Stay tuned for instructions."

To know more about this news in detail, visit the Ethereum official blog.

Ethereum classic suffered a 51% attack; developers deny, state a new ASIC card was tested
Ethereum's 1000x Scalability Upgrade 'Serenity' is coming with better speed and security: Vitalik Buterin at Devcon
Origin DApp: A decentralized marketplace on Ethereum mainnet aims to disrupt gig economy platforms like Airbnb and Uber
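For readers unfamiliar with the attack class, here is a minimal, self-contained Python sketch, not Solidity and not taken from ChainSecurity's analysis, of the general reentrancy shape: a contract that pays out before updating its own state can be drained by a callback that re-enters the withdrawal function. The vault and attacker names are purely illustrative; the EIP-1283 concern was that cheaper SSTORE costs could have allowed this kind of state change inside low-gas callbacks that were previously considered safe.

```python
# Toy illustration of reentrancy, in plain Python rather than Solidity.
class VulnerableVault:
    def __init__(self):
        self.balances = {}

    def deposit(self, addr, amount):
        self.balances[addr] = self.balances.get(addr, 0) + amount

    def withdraw(self, addr, receive_callback):
        amount = self.balances.get(addr, 0)
        if amount > 0:
            receive_callback(amount)       # external call happens first...
            self.balances[addr] = 0        # ...state is updated only afterwards


class Attacker:
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0
        self.reentered = False

    def receive(self, amount):
        self.stolen += amount
        if not self.reentered:             # re-enter once before the balance is zeroed
            self.reentered = True
            self.vault.withdraw("attacker", self.receive)


vault = VulnerableVault()
vault.deposit("attacker", 1)
attacker = Attacker(vault)
vault.withdraw("attacker", attacker.receive)
print(attacker.stolen)                     # 2 -- two withdrawals from a single deposit
```

In Solidity the standard defense is the checks-effects-interactions pattern: update the balance before making the external call.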

Pwn2Own Vancouver 2019: Targets include Tesla Model 3, Oracle, Google, Apple, Microsoft, and more!

Melisha Dsouza
16 Jan 2019
4 min read
Pwn2Own, run by Trend Micro's Zero Day Initiative, is one of the industry's toughest hacking contests. Started in 2007, Pwn2Own has become a platform for white hats to test their skills against various types of software, and winners have been awarded more than $4 million over the lifetime of the program. Pwn2Own Vancouver, the contest's spring vulnerability research competition, will run from March 20 to 22 at the CanSecWest conference. The contest has five categories: web browsers, virtualization software, enterprise applications, server-side software, and, for the first time, an 'Automotive' category with the Tesla Model 3 chosen as a target by ZDI. Other targets include software products from Apple, Google, Microsoft, Mozilla, Oracle, and VMware. Let's look into what's in store for every category:

#1 Automotive category: Tesla Model 3

"We develop our cars with the highest standards of safety in every respect, and our work with the security research community is invaluable to us" -David Lau, Vice President of Vehicle Software at Tesla

Tesla has long engaged with the hacker community through its bug bounty program, which pays up to $15,000 for security exploits of its systems. In 2018 the company altered its warranty policy: as long as security exploits are found and reported within the limits outlined by the bug bounty program, the user's warranty remains intact. At Pwn2Own Vancouver, researchers will have six focal points for discovering vulnerabilities in the car. While prizes in this category range from $35,000 to $300,000, the winning security researcher can also walk away with their very own Model 3. Tesla's line of action is an indication of how seriously it takes the security of its self-driving cars.

#2 Virtualization category

The targets for the virtualization category are:
Oracle VirtualBox
VMware Workstation
VMware ESXi
Microsoft Hyper-V Client

Microsoft leads the virtualization category with a $250,000 award for a successful Hyper-V Client guest-to-host escalation. VMware is a Pwn2Own sponsor for 2019, and VMware ESXi and VMware Workstation will serve as targets with awards of $150,000 and $70,000 respectively. Oracle VirtualBox is included in this category with a prize of $35,000.

#3 Browser category

Within the browser category, the targets are:
Google Chrome
Microsoft Edge
Apple Safari
Mozilla Firefox

We saw a lot of web browsers get hacked in 2018, so it is good to see the biggest names in the tech industry coming forward to have vulnerabilities in their products found before malicious actors can exploit them. A browser exploit for Firefox will be awarded $40,000, and the award for exploiting Chrome is $80,000. Additionally, a contestant exploiting Edge with a Windows Defender Application Guard (WDAG) escape will be awarded $80,000. Contestants exploiting Safari will be awarded between $55,000 and $65,000.

#4 Enterprise Application category

The Enterprise Application category has the following targets:
Adobe Reader
Microsoft Office 365
Microsoft Outlook

The products offered by Adobe and Microsoft are used by almost everyone on a daily basis, so finding a security flaw in this category would safeguard the millions who use these products regularly. A Reader exploit will be awarded $40,000, breaking into Office earns $60,000, and an Outlook exploit earns $100,000.

#5 Server-side category

The final category in this contest has Microsoft Windows RDP as its target. A successful RDP exploit will earn the contestant $150,000.

You can head over to the Zero Day Initiative's official blog for more information on the contest, the rules, the awards, and much more.

Microsoft urgently releases Out-of-Band patch for an active Internet Explorer remote code execution zero-day vulnerability
NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
AI chipmaking startup 'Graphcore' raises $200m from BMW, Microsoft, Bosch, Dell

Tech jobs dominate LinkedIn’s most promising jobs in 2019

Amrata Joshi
15 Jan 2019
7 min read
Last week LinkedIn shared its list of this year's most promising jobs, based on LinkedIn data. The positions listed below, which come with high salaries and a significant number of openings, summarize LinkedIn's findings.

1. Data Scientist
Data science combines data analysis, machine learning, and statistics to understand and evaluate data, and the field has been on the rise for the past few years. Data scientists are responsible for taking a large number of data points, both structured and unstructured, and cleaning and organizing them. As per a report by Gartner, machine-learning-based intelligent systems will be leading the technology race through 2020. The median salary of a data scientist is $130,000. Year-on-year job growth for this role is approximately 56%, with more than 4,000 job openings. Applicants need to be well-versed in Data Science, Data Mining, Data Analysis, Python, and Machine Learning.

2. Site Reliability Engineer
Site reliability engineering is a discipline that applies software engineering to IT operations problems in order to create highly reliable software. An SRE is expected to handle on-call and emergency support and to ensure that the software has good logging and diagnostics. The SRE concept has been adopted by major companies including Dropbox, Airbnb, and Netflix, and Indeed, a job listings site, shows hundreds of SRE positions. The median salary of an SRE is $200,000. Year-on-year job growth for this role is approximately 72%, with more than 1,400 job openings. This role might prove to be as popular as the 'data scientist' role. Top skills required are Linux, Software Development, Python, Cloud Computing, and SQL.

3. Enterprise Account Executive
Enterprise account executives are responsible for managing business relationships and contacts with the organization's larger customers and for developing a strategic plan to maximize sales opportunities. The median salary of an enterprise account executive is $182,000. Year-on-year job growth for this role is approximately 62%, with more than 1,000 job openings. Key skills required are Salesforce, Cloud Computing, Solution Selling, Software-as-a-Service, and Sales Management.

4. Product Designer
Product designers use their technical knowledge and design skills to improve existing products and produce them at a lower cost. Year-on-year job growth for this role is 86%, reflecting more than 2,000 job openings. The median salary for this role is $121,500. Key skills required are Product Design, User Experience (UX), User Interface Design, Graphic Design, and Adobe Photoshop.

5. Product Owner
The product owner is responsible for delivering sprint demos to key stakeholders and is part of the Scrum team. Year-on-year job growth for this role is 87%, reflecting more than 1,100 job openings. The median salary for this role is $101,000. Top skills required are Business Analytics, Agile Methodologies, Business Process Improvement, and Scrum.

6. Customer Success Manager
Customer success management integrates the functions of marketing, sales, professional services, training, and support to maximize sustainable value for both customer and company. The median salary for this role is $88,500. Year-on-year job growth for this role is 80%, reflecting more than 2,000 job openings. Top skills for this role include Customer Relationship Management, Salesforce, Software-as-a-Service, Customer Satisfaction, and Cloud Computing.

7. Engagement Manager
An engagement manager is responsible for assisting customers in using services throughout projects, helping solve client problems, and managing the financial aspects of contracts. The median salary for this role is $130,000. Year-on-year job growth for this role is 43%, reflecting more than 1,000 job openings. Skills required include Program Management, Business Analysis, Business Process Improvement, Analytics, and Customer Relationship Management.

8. Solutions Architect
Solution architecture involves designing and managing solutions to specific business problems. A solution architect needs to be more senior and more mature than a computer programmer. Julie Tessaro, an employee at Cloud Academy, says, "AWS skills in architecting are quite high in demand and this trend will increase in the years to come as more and more companies will migrate to the cloud." The median salary for this role is $139,000. Year-on-year job growth is 47%, reflecting more than 5,800 job openings. Skills required include Solutions Architecture, Cloud Computing, Software Development, SQL, and Software Development Lifecycle.

9. Information Technology Lead
An information technology lead is responsible for the overall planning and execution of all IT functions. The role requires meeting customer requirements and supporting and maintaining existing infrastructure and applications. The median salary for this role is $121,000. Year-on-year job growth for this role is 141%, reflecting more than 1,400 job openings. Skills required include Information Technology, Technical Support, Business Process Improvement, Business Analysis, and Troubleshooting.

10. Scrum Master
A Scrum master is the facilitator for an agile development team. The role has been in demand since 2017 and the trend continues. The median salary for this role is $103,000. Year-on-year job growth for this role is 67%, reflecting more than 2,000 job openings. Skills required include Scrum, Agile Methodologies, Software Development, Business Analysis, and Software Development Lifecycle.

11. Cloud Architect
A cloud architect is responsible for overseeing a company's cloud computing strategy. The median salary for this role is $155,000. Year-on-year job growth for this role is 88%, reflecting more than 1,700 job openings. Skills required include Cloud Computing, Software Development, Amazon Web Services, Solution Architecture, and Linux.

12. Product Marketing Manager
The product marketing manager is expected to work on the strategy behind the product roadmap and must work with the engineering team to build products. The median salary for this role is $134,000. Year-on-year job growth for this role is 30%, reflecting more than 1,891 job openings. Applicants need to be well-versed in Product Marketing, Product Management, Digital Marketing, Cross-functional Team Leadership, and Product Development.

13. Solutions Consultant
Solutions consultants are responsible for testing and ensuring that a solution performs as designed. The median salary for this role is $110,000. Year-on-year job growth for this role is 73%, reflecting more than 1,126 job openings. Applicants need to be well-versed in Cloud Computing, Enterprise Software, Customer Relationship Management, Software-as-a-Service, and Business Analysis.

14. Product Manager
A product manager is responsible for guiding the success of a product and leading the team responsible for improving it. The median salary for this role is $121,000. Year-on-year job growth for this role is 29%, reflecting more than 10,268 job openings. Applicants need to be well-versed in Product Management, Product Development, Cross-Functional Team Leadership, Engineering, and Product Marketing.

15. Machine Learning Engineer
According to Forbes, 61% of organizations most frequently picked machine learning / artificial intelligence as their company's data initiative for this year. A machine learning engineer is responsible for designing and developing machine learning and deep learning systems. The median salary for this role is $182,000. Year-on-year job growth for this role is 96%, reflecting more than 709 job openings. Applicants need to be well-versed in Machine Learning, Python, Data Mining, Artificial Intelligence, and Data Science.

We hope this list by LinkedIn helps you shortlist a role and develop the skills it requires. Watch this space for more news and information.

LinkedIn used email addresses of 18M non-members to buy targeted ads on Facebook, reveals a report by DPC, Ireland
Creator-Side Optimization: How LinkedIn's new feed model helps small creators
Customize your LinkedIn profile headline

ITIF along with Google, Amazon, and Facebook bargain on Data Privacy rules in the U.S.

Savia Lobo
15 Jan 2019
2 min read
Yesterday, the Information Technology and Innovation Foundation (ITIF), supported by Google, Amazon, and Facebook, proposed 'A Grand Bargain on Data Privacy Legislation for America' to lawmakers. According to The Verge, the proposal states that "any new federal data privacy bill should preempt state privacy laws and repeal the sector-specific federal ones entirely."

The proposal highlights a few basic requirements, such as more transparency, data interoperability, and having users opt into the collection of sensitive personal data. "All 50 states have their own laws when it comes to notifying users after a data breach, and ITIF asks for a single breach standard in order to simplify compliance. It also calls to expand the Federal Trade Commission's authority to fine companies that violate the data privacy law, something industry leaders have asked for in the past", The Verge reports. The proposal would additionally preempt state laws like California's new privacy act while repealing other federal privacy laws, including the Health Insurance Portability and Accountability Act (HIPAA) and the Family Educational Rights and Privacy Act (FERPA).

Alan McQuinn, an ITIF senior policy analyst, said, "Privacy regulations aren't free—they create costs for consumers and businesses, and if done badly, they could undermine the thriving U.S. digital economy. Any data privacy regulations should create rules that facilitate data collection, use, and sharing while also empowering consumers to make informed choices about their data privacy".

Sen. Richard Blumenthal (D-CT) said, "This proposal would protect no one – it is only a grand bargain for the companies who regularly exploit consumer data for private gain and seek to evade transparency and accountability." He added that the proposal simply highlights the fact that Big Tech cannot be trusted to write its own rules.

To know more about this in detail, visit The Verge.

ACLU files lawsuit against 11 federal criminal and immigration enforcement agencies for disclosure of information on government hacking
The US to invest over $1B in quantum computing, President Trump signs a law
The district of Columbia files a lawsuit against Facebook for the Cambridge Analytica scandal
Introducing OrgKit: An all-in-one tool to start a company on Microsoft tools

Prasad Ramesh
15 Jan 2019
2 min read
Last week, security expert SwiftOnSecurity introduced OrgKit on Twitter. It is a new way to stand up a complete, configured company or business across Microsoft Active Directory, Group Policy, Azure Active Directory, and Office 365.

Why was OrgKit created?

The whole Microsoft ecosystem was designed to be customized per organization according to its needs. That is why a complete repository of Microsoft product configuration guidance and documentation for organizations is so rare, and why the majority of organizations are ill-equipped to understand what this really means. Companies that use these Microsoft services have very diverse configuration histories, and with that comes the need to support many different configuration types. This has prevented Microsoft from providing generic defaults and guidance for setting things up.

What is OrgKit for?

It is designed to provide a series of templates that can set up a new, well-documented IT environment, ideally for a mid-size organization. It can serve as a public example of what's possible and allow companies to make informed decisions, particularly companies that lack security knowledge or are unaware of what other businesses are doing. It is also meant for a company that has to start over after a full network compromise, or that is creating a new subsidiary business.

Usage of PowerShell DSC

To build and maintain a Windows environment with a centralized design that supports all the necessary tools, PowerShell DSC is the ideal tool. It provides a good set of abilities and will most likely be part of future versions of OrgKit. Currently, however, OrgKit aims to help Windows administrators who are already trying to cope with many new technologies and concepts, and who need to run the system long term with other employees. PowerShell DSC is considered a specialized skill that can revert actions done outside its own central control, and using it requires buy-in from the whole organization. Hence, the kind of use cases OrgKit targets cannot depend on it.

To check out the repository, head over to GitHub.

How 3 glitches in Azure Active Directory MFA caused a 14-hour long multi-factor authentication outage in Office 365, Azure and Dynamics services
Microsoft announces official support for Windows 10 to build 64-bit ARM apps
A Microsoft Windows bug deactivates Windows 10 Pro licenses and downgrades to Windows 10 Home, users report

TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf.function and more

Amrata Joshi
15 Jan 2019
3 min read
Just two months ago, Google's TensorFlow, one of the most popular machine learning platforms, celebrated its third birthday. Last August, Martin Wicke, an engineer at Google, posted a list of what to expect in TensorFlow 2.0, the open source machine learning framework, on the TensorFlow Google group. The key features he listed include:

This release will come with eager execution.
This release will feature more platforms and languages along with improved compatibility.
Deprecated APIs will be removed.
Duplication will be reduced.

https://twitter.com/aureliengeron/status/1030091835098771457

The early preview of TensorFlow 2.0 is expected soon. TensorFlow 2.0 is expected to come with high-level APIs, robust model deployment, powerful experimentation for research, and a simplified API.

Easy model building with Keras

This release will adopt Keras, a user-friendly API standard for machine learning, for building and training models. As Keras provides various model-building APIs, including sequential, functional, and subclassing, it becomes easier for users to choose the right level of abstraction for their project.

Eager execution and tf.function

TensorFlow 2.0 will also feature eager execution, which is useful for immediate iteration and debugging. tf.function will translate Python programs into TensorFlow graphs: performance optimizations are preserved, while the flexibility of tf.function makes it easier to express programs in plain Python. Further, tf.data will be used for building scalable input pipelines.

Transfer learning with TensorFlow Hub

The TensorFlow team has made things much easier for those who are not building a model from scratch. Users will soon be able to use models from TensorFlow Hub, a library of reusable parts of machine learning models, to train a Keras or Estimator model.

API cleanup

Many APIs are removed in this release, among them tf.app, tf.flags, and tf.logging. The main tf.* namespace will be cleaned up by moving lesser-used functions into subpackages such as tf.math. A few APIs have been replaced with their 2.0 equivalents, such as tf.keras.metrics, tf.summary, and tf.keras.optimizers. The v2 upgrade script can be used to apply these renames automatically.

Major improvements

Queue runners will be removed in this release.
Graph collections will also be removed.
APIs will be renamed for better usability; for example, name_scope can be accessed as tf.name_scope or tf.keras.backend.name_scope.

To ease migration to TensorFlow 2.0, the team will provide a conversion tool that updates TensorFlow 1.x Python code to use TensorFlow 2.0 compatible APIs, and flags cases where code cannot be converted automatically. Stored GraphDefs and SavedModels remain backward compatible. With this release, tf.contrib will no longer be distributed: some of the existing contrib modules will be integrated into the core project or moved to separate repositories, and the rest will be removed.

To know more about this news, check out the post by the TensorFlow team on Medium.
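To make the Keras-plus-tf.function workflow concrete, here is a small sketch written against the TensorFlow 2.x API as it eventually shipped (the article describes a pre-release, so details may differ); the layer sizes and synthetic data are arbitrary and only there to make the example runnable.

```python
import tensorflow as tf

# Eager execution is the default in TF 2.x: ops run immediately, no Session needed.
x = tf.constant([[1.0, 2.0]])
print(x + 1)

# tf.function traces this Python function into a TensorFlow graph.
@tf.function
def squared_error(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Keras sequential API for model building and training.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss="mse")

# Tiny synthetic data just to make the example runnable.
data = tf.random.normal((64, 4))
labels = tf.random.normal((64, 1))
model.fit(data, labels, epochs=2, verbose=0)

print(squared_error(labels, model(data)).numpy())
```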
Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]
Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google researchers introduce JAX: A TensorFlow-like framework for generating high-performance code from Python and NumPy machine learning programs

FoundationDB open-sources FoundationDB Record Layer with schema management, indexing facilities and more

Natasha Mathur
15 Jan 2019
2 min read
Yesterday, the FoundationDB team announced the open-source release of the FoundationDB Record Layer, a relational database technology used by CloudKit. The Record Layer stores structured data in much the same way as a relational database and comes with features such as schema management and indexing facilities, along with a rich set of query capabilities. The Record Layer is used by Apple to support its apps and services.

Because the Record Layer is built on top of FoundationDB, it inherits FoundationDB's strong ACID semantics and its performance in a distributed setting. In addition to those ACID (atomicity, consistency, isolation, durability) guarantees, the Record Layer also uses FoundationDB's transactional semantics, which lets it provide features similar to those found in a traditional relational database, but in a distributed setting. The Record Layer's design and core features allow it to scale to millions of concurrent users, a diverse ecosystem of client applications, and varied query access patterns, and it can balance resource consumption across users in a predictable way.

The combination of the Record Layer and FoundationDB forms the backbone of CloudKit, Apple's framework that provides an interface for moving data between apps and iCloud containers.

Other highlights of the Record Layer include:
support for transactional secondary indexing that takes full advantage of the Protocol Buffer data model.
a declarative query API for retrieving data, along with a query planner that turns those queries into concrete database operations.
a large number of deep extension points that let users build custom index maintainers and query planning features, allowing them to seamlessly "plug in" new index types.

For more information, check out the official FoundationDB announcement.

FoundationDB open sources FoundationDB Document Layer with easy scaling, no sharding, consistency and more
FoundationDB 6.0.15 releases with multi-region support and seamless failover management
Introducing EuclidesDB, a multi-model machine learning feature database
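The Record Layer itself exposes a Java API, but the transactional key-value substrate it builds on can be sketched with FoundationDB's Python bindings. The snippet below is an illustrative sketch, not Record Layer code: the API version, key prefix, and stored bytes are assumptions, and it expects a locally running FoundationDB cluster reachable via the default cluster file.

```python
import fdb

fdb.api_version(600)          # select the binding API version (6.0-era; an assumption)
db = fdb.open()               # uses the default cluster file

@fdb.transactional
def set_profile(tr, user_id, blob):
    # Everything inside runs as one ACID transaction and is retried on conflict.
    tr[b"profile/" + user_id] = blob

@fdb.transactional
def get_profile(tr, user_id):
    return tr[b"profile/" + user_id]

set_profile(db, b"42", b"serialized-record-bytes")   # illustrative payload
print(get_profile(db, b"42"))
```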

Google Home and Amazon Alexa can no longer invade your privacy, thanks to Project Alias!

Savia Lobo
15 Jan 2019
2 min read
Project Alias is an open-source, 'teachable' parasite that gives users more control over their smart home assistants in terms of customization and privacy. By simply downloading an app, users can train the smart home devices to accept custom wake-up names while Alias interferes with their built-in microphones. Once trained, Alias can take control of your home assistant by activating it for you.

Tellart designer Bjørn Karmann and Topp designer Tore Knudsen are the brilliant minds behind this experimental project. Drawing an analogy with a parasitic fungus, Knudsen says, "This [fungus] is a vital part of the rain forest, since whenever a species gets too dominant or powerful it has higher chances of getting infected, thus keeping the diversity in balance". He further added, "We wanted to take that as an analogy and show how DIY and open source can be used to create 'viruses' for big tech companies."

The hardware part of Project Alias is a plug-powered microphone/speaker unit that sits on top of a user's smart speaker of choice. It is powered by a fairly typical Raspberry Pi chipset.

Input and output logic of Alias

Both Amazon and Google have a track record of storing past conversations in the cloud; Project Alias, however, promises privacy. According to Fast Company, smart home assistants "aren't meant to listen in to your private conversations, but by nature, the devices must always be listening a little to be listening at just the right time–and they can always mishear any word as a wake word."

Knudsen says, "If somebody would be ready to invest, we would be ready for collaboration. But initially, we made this project with a goal to encourage people to take action and show how things could be different . . . [to] ask what kind of 'smart' we actually want in the future."

To know more about Project Alias in detail, head over to Bjørn Karmann's website or GitHub. Here's a short video on the working of Project Alias: https://player.vimeo.com/video/306044007

Google's secret Operating System 'Fuchsia' will run Android Applications: 9to5Google Report
US government privately advised by top Amazon executive on web portal worth billions to the Amazon; The Guardian reports
France to levy digital services tax on big tech companies like Google, Apple, Facebook, Amazon in the new year
CNCF releases 9 security best practices for Kubernetes to protect a customer's infrastructure

Melisha Dsouza
15 Jan 2019
3 min read
According to CNCF's bi-annual survey conducted in August 2018, 83% of respondents prefer Kubernetes for its container management tooling, 58% use Kubernetes in production, 42% are evaluating it for future use, and 40% of enterprise companies (5000+ employees) run Kubernetes in production. These statistics give a clear picture of Kubernetes' popularity among developers as a container orchestrator. However, the recent security flaw discovered in Kubernetes (now patched), which enabled attackers to compromise clusters and perform illicit activities, did raise concerns among developers. A container environment like Kubernetes consists of multiple layers that all need to be secured. Taking this into consideration, the CNCF has released '9 Kubernetes Security Best Practices Everyone Must Follow':

#1 Upgrade to the latest version
Kubernetes has quarterly updates that include bug and security fixes. Users are advised to always upgrade to the latest release with up-to-date security patches to keep their systems protected.

#2 Role-Based Access Control (RBAC)
Users can control who can access the Kubernetes API, and with what permissions, by enabling RBAC. The blog advises against giving anyone cluster-admin privileges and recommends granting access only as needed, on a case-by-case basis (see the sketch after this summary).

#3 Namespaces as security boundaries
Namespaces create an important level of isolation between components. The CNCF also notes that it is easier to apply security controls and policies when workloads are deployed in separate namespaces.

#4 Keep sensitive workloads separate
Sensitive workloads should run on a dedicated set of machines, so that if a less secure application connected to a sensitive workload is compromised, the sensitive workload remains unaffected.

#5 Secure cloud metadata access
Sensitive metadata storing confidential information such as credentials can be stolen and misused. The blog advises using Google Kubernetes Engine's metadata concealment feature to avoid this.

#6 Cluster network policies
Network policies let developers control network access to their containerized applications.

#7 Implement a cluster-wide Pod Security Policy
This defines how workloads are allowed to run in a cluster.

#8 Improve node security
Users should ensure that the host is configured correctly and securely by checking the node's configuration against the CIS benchmarks, ensure the network blocks access to ports that can be exploited by malicious actors, and minimize administrative access to Kubernetes nodes.

#9 Audit logging
Audit logs should be enabled and monitored for anomalous API calls and authorization failures, which can indicate that a malicious actor is trying to get into the system.

The blog further advises users to look for tools that assist in the continuous monitoring and protection of their containers. You can head over to the Cloud Native Computing Foundation's official blog to read more about these best practices.

CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
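As a concrete illustration of practice #2, here is a hedged sketch that uses the official Kubernetes Python client to create a namespaced, least-privilege Role rather than handing out cluster-admin. It is not taken from the CNCF post: the namespace, role name, and rules are illustrative assumptions, and it presumes a reachable cluster and a local kubeconfig.

```python
from kubernetes import client, config

config.load_kube_config()                 # or config.load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

# Dict body mirrors the YAML manifest you would otherwise `kubectl apply`.
pod_reader = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "dev"},
    # Read-only access to pods in the "dev" namespace, nothing more.
    "rules": [{
        "apiGroups": [""],
        "resources": ["pods"],
        "verbs": ["get", "list", "watch"],
    }],
}

rbac.create_namespaced_role(namespace="dev", body=pod_reader)
```

Binding the role to a specific user or service account (via a RoleBinding) then grants exactly this access and no more, which is the case-by-case model the CNCF post recommends.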

Elixir 1.8 released with new features and infrastructure improvements

Prasad Ramesh
15 Jan 2019
3 min read
In a blog post yesterday, Elixir 1.8 was announced. It comes with a variety of infrastructure-level improvements, faster compilation times, support for common patterns, and added features around introspection of the system. Elixir is a functional general-purpose programming language that runs on top of the Erlang virtual machine.

Custom struct inspections with Elixir 1.8

Elixir 1.8 has a derivable implementation of the Inspect protocol, which makes it simpler to filter data from existing data structures whenever they are inspected. If a user struct contains security- and privacy-sensitive information, inspecting it via inspect(user) will include all fields, which can cause information such as emails and encrypted passwords to appear in logs or error reports. Previously this behavior was avoided by defining a custom implementation of the Inspect protocol. Elixir v1.8 makes it easier by allowing users to derive the Inspect protocol: user structs are then printed with only the chosen fields, while the remaining fields are collapsed. Passing @derive {Inspect, except: [...]} keeps all fields by default and excludes only some.

Custom time zone database support

Elixir v1.8 defines a Calendar.TimeZoneDatabase behaviour that allows developers to plug in their own time zone databases. With an explicit contract for time zone behaviours defined, Elixir can now extend the DateTime API, allowing the addition of functions like DateTime.shift_zone/3. The default time zone database in Elixir is Calendar.UTCOnlyTimeZoneDatabase, which can only handle UTC. Among other Calendar-related improvements, Date.day_of_year/1, Date.quarter_of_year/1, Date.year_of_era/1, and Date.day_of_era/1 have been added.

Speedy compilation and performance improvements

Improvements made to the compiler over 2018 mean Elixir v1.8 compiles code about 5% faster. The Elixir compiler also emits more efficient code for range checks in guards, charlists with interpolation, and when working with records via the Record module. EEx templates have also been optimized and emit more compact code that runs faster.

Better instrumentation and ownership with $callers

The Task module is a way to spawn lightweight processes to perform concurrent work. When a new process is spawned, Elixir annotates the parent process via the $ancestors key, and instrumentation tools can use this information to track the relationship between events occurring across multiple processes. Often, however, tracking only $ancestors is not sufficient. Developers are encouraged to start tasks under a supervisor, as that gives more visibility and control over task termination when a node shuts down; the relationship between the calling code and the task is then tracked via the $callers key in the process dictionary, which aligns well with the existing $ancestors key. In Elixir 1.8, when a task is spawned directly from code without a supervisor, the parent process is listed under both $ancestors and $callers. This feature allows instrumentation and monitoring tools to better track and relate events in the system. It can also be used by the Ecto Sandbox, which lets developers run concurrent tests against the database using transactions and an ownership mechanism in which each process explicitly gets a connection assigned to it.

These were the major changes; for the full list of improvements and bug fixes, take a look at the release notes.

Elixir Basics – Foundational Steps toward Functional Programming
Erlang turns 20: Tracing the journey from Ericsson to Whatsapp
Python governance vote results are here: The steering council model is the winner

Mozilla disables Adobe Flash plugin support by default in Firefox Nightly 69

Bhagyashree R
15 Jan 2019
2 min read
Yesterday, the Firefox team disabled the Adobe Flash plugin by default in Firefox Nightly 69; the plugin will eventually be deprecated as per Mozilla's Plugin Roadmap for Firefox. Users can still activate Flash on certain sites if they want to, through the browser settings. Flash support will be completely removed from the consumer versions of Firefox by early 2020, while the Firefox Extended Support Release (ESR) will continue to support Flash until its end-of-life in 2020.

Why has Mozilla decided to disable Adobe Flash?

Recent years have seen huge growth in open web standards like HTML5, WebGL, and WebAssembly. These technologies now provide many of the capabilities and functionalities for which plugins used to be needed, so browser vendors prefer to integrate such capabilities directly into browsers and deprecate plugins.

Hence, back in 2017, Adobe announced that, along with its technology partners Google, Mozilla, Apple, Microsoft, and Facebook, it was planning to end-of-life Flash. It added that by the end of 2020 it would stop updating and distributing the Flash Player, and it encouraged content creators to migrate any content in Flash format to new open formats.

Following this, all five partners announced their plans of action. Apple already did not support Flash on iPhone, iPad, and iPod; for Mac users, Flash has not come pre-installed since 2010 and is off by default if users decide to install it. Facebook announced that it is supporting game developers in migrating their Flash games to HTML5. Google will disable Flash by default in Chrome and remove it completely by the end of 2020. Microsoft also announced that it will phase Flash out of Microsoft Edge and Internet Explorer, eventually removing Flash from Windows entirely by the end of 2020.

Mozilla releases Firefox 64 and Firefox 65 beta
Mozilla shares why Firefox 63 supports Web Components
Introducing Firefox Sync centered around user privacy
Googlers for ending forced arbitration: a public awareness social media campaign for tech workers launches today

Natasha Mathur
15 Jan 2019
4 min read
There has been a running battle between Google and its employees for quite some time now. A group of Google employees announced yesterday that they are launching a public awareness social media campaign from 9 AM to 6 PM EST today. The group, called 'Googlers for Ending Forced Arbitration', aims to educate people about the forced arbitration policy via Instagram and Twitter, where its members will also share their experiences with the practice.

https://twitter.com/endforcedarb/status/1084813222505410560

As part of its effort, the group has researched fellow tech employees, academic institutions, labor attorneys, and advocacy groups, as well as the contracts of around 30 major tech companies. The group also published a post on Medium yesterday, stating that "ending forced arbitration is the gateway change needed to transparently address inequity in the workplace".

According to the National Association of Consumer Advocates, "In forced arbitration, a company requires a consumer or employee to submit any dispute that may arise to binding arbitration as a condition of employment or buying a product or service. The employee or consumer is required to waive their right to sue, to participate in a class action lawsuit, or to appeal".

https://twitter.com/ODwyerLaw/status/1084893776429178881

Demands for more transparency around Google's sexual assault policies have become a bone of contention for the company. For instance, shareholder James Martin and two pension funds sued Alphabet's board members last week for protecting top executives accused of sexual harassment; the lawsuit seeks major changes to Google's corporate governance and urges more clarity around Google's policies. Similarly, Liz Fong-Jones, a developer advocate on Google Cloud Platform, revealed earlier this month that she is planning to leave the firm, citing Google's lack of leadership in responding to the demands made by employees during the Google walkout.

It was back in November 2018 that over 20,000 Google employees organized the Google "walkout for real change" and walked out of their offices, along with temps and contractors, to protest the discrimination, racism, and sexual harassment encountered within Google. The employees made five demands as part of the walkout, including an end to forced arbitration for all employees (including temps) in cases of sexual harassment and other forms of discrimination.

Although Google announced back in November, in response to the walkout, that it was ending its forced arbitration policy (a move soon followed by Facebook), employees are not convinced. They argue that the announcement only made for strong headlines and did not actually do enough for employees, noting that there were "no meaningful gains for worker equity … nor any actual change in employee contracts or future offer letters (as of this publication, we have confirmed Google is still sending out offer letters with the old arbitration policy)".

Moreover, forced arbitration still exists at Google for cases involving other, non-sexual forms of workplace harassment and discrimination. Google has made the forced arbitration policy optional only for individual cases of sexual assault brought by full-time employees; it still applies to class-action lawsuits and to the thousands of contractors who work for the company. Additionally, employee contracts in the US still have the arbitration waiver in effect.

"Our leadership team responded to our five original demands with a handful of partial policy changes. The other 'changes' they announced simply re-stated our current, ineffective practices or introduced extraneous measures that are irrelevant to bringing equity to the workplace", the group writes in its Medium post.

Follow the public awareness campaign on the group's Instagram and Twitter accounts.

Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"

Tumblr open sources its Kubernetes tools for better workflow integration

Melisha Dsouza
15 Jan 2019
3 min read
Yesterday, Tumblr announced that it is open sourcing three tools, developed in-house, that help developers integrate Kubernetes into their workflows. Tumblr built these tools over the course of migrating its eleven-year-old infrastructure to Kubernetes. These are the three tools and their features, as listed on the Tumblr blog:

#1 k8s-sidecar-injector
Containerizing complex applications can be time-consuming. Sidecars offer a way out, allowing developers to emulate older deployments with co-located services on virtual machines or physical hosts. The k8s-sidecar-injector dynamically injects sidecars, volumes, and environment data into pods as they are launched, which cuts the overhead of copy-pasting code to add sidecars to deployments and cronjobs. The tool watches the Kubernetes API for pod launches and determines which sidecar to inject for each pod. It is particularly useful when containerizing legacy applications that require a complex sidecar configuration (a toy example of what injection means is sketched below).

#2 k8s-config-projector
The k8s-config-projector is a command-line tool born out of the need to access a subset of configuration data (feature flags, lists of hosts/IPs and ports, and application settings) and to be informed as soon as this data changes. Config data defines how deployed services operate at Tumblr. The Kubernetes ConfigMap resource lets users provide a service with configuration data and update that data in running pods without redeploying the application. To use this feature to configure Tumblr's services and jobs in a Kubernetes-native manner, the team had to bridge the gap between their canonical configuration store (a git repo of config files) and ConfigMaps. k8s-config-projector combines the git repo hosting configuration data with "projection manifest" files that describe how to group and extract settings from the config repo and transmute them into ConfigMaps. Developers can now encode the set of configuration data an application needs to run in a projection manifest. The blog states that "as the configuration data changes in the git repository, CI will run the projector, projecting and deploying new ConfigMaps containing this updated data, without needing the application to be redeployed".

#3 k8s-secret-projector
Tumblr stores secure credentials (passwords, certificates, etc.) in access-controlled vaults. With the k8s-secret-projector tool, developers can request access to subsets of credentials for a given application without being granted access to the secrets as a whole. The tool ensures applications always have the appropriate secrets at runtime, while enabling automated systems (certificate refreshers, DB password rotations, and so on) to manage and update these credentials without redeploying or restarting the application. It does this by combining two repositories: projection manifests and credentials. A continuous integration (CI) tool such as Jenkins runs the tool against any change in the projection manifests repository, generating new Kubernetes Secret YAML files, which continuous deployment then validates and deploys to any number of Kubernetes clusters. The tool also allows secrets to be deployed in Kubernetes environments by encrypting generated Secrets before they touch the disk.
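To make the first tool's job concrete, here is a self-contained toy sketch, not Tumblr's k8s-sidecar-injector, of what "injecting a sidecar into a pod as it launches" means: take a pod spec and append an extra container plus a shared volume before the pod is admitted. Real injectors do this inside a Kubernetes mutating admission webhook; the container names and images below are invented.

```python
import copy

# Invented sidecar definition, purely for illustration.
SIDECAR = {
    "name": "log-shipper",
    "image": "example/log-shipper:1.0",
    "volumeMounts": [{"name": "shared-logs", "mountPath": "/var/log/app"}],
}

def inject_sidecar(pod_spec: dict) -> dict:
    """Return a copy of the pod spec with the sidecar and its shared volume added."""
    patched = copy.deepcopy(pod_spec)
    patched.setdefault("volumes", []).append({"name": "shared-logs", "emptyDir": {}})
    patched["containers"].append(SIDECAR)
    return patched

# A pod spec as an application team might write it, with no sidecar.
app_pod = {
    "containers": [{
        "name": "app",
        "image": "example/app:2.3",
        "volumeMounts": [{"name": "shared-logs", "mountPath": "/var/log/app"}],
    }],
}

print(inject_sidecar(app_pod))   # the launched pod ends up with both containers
```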
You can head over to Tumblr's official blog for examples of each tool.

Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes