
Tech News


Liz Fong-Jones, prominent ex-Googler, shares her experience at Google and ‘grave concerns’ for the company

Natasha Mathur
14 Feb 2019
5 min read
“I can no longer bail out a raft with a teaspoon while those steering punch holes in it” -- Liz Fong-Jones

Liz Fong-Jones, a former Google engineer and an activist known for being outspoken about employee rights in Silicon Valley, published a post on Medium yesterday. In the post, she talks about her ‘grave concerns’ over strategic decisions made at Google and the way it has ‘misused its power’ by putting profits above the well-being of people.

Jones, who emerged as a prominent figure in the field of Site Reliability Engineering, joined Google 11 years ago. However, she left the company last month, citing Google’s lack of leadership in response to the Google walkout demands in November 2018. “I can’t continue burning myself out pushing for change. Instead, I am putting my own health first by joining a workplace that has a more diverse and fair working environment”, writes Jones.

The Google Walkout was a response to a report on workplace sexual harassment published by the New York Times in October 2018. The report revealed that Google protected senior execs accused of sexual abuse within the workplace and paid them heavy exit packages (including a $90 million payout to Andy Rubin). Jones mentions that it was this event that “utterly shattered employees’ trust and goodwill in the management”.

She adds that Google management failed to effectively address the Google Walkout demands. Apart from not meeting the structural demand for an employee representative on the board, Google also didn’t entirely put an end to forced arbitration within the workplace. A group of Google employees called ‘Googlers for ending forced arbitration’ launched a public awareness campaign on Instagram and Twitter last month to educate people across industries about forced arbitration. They argued that Google’s announcement on ending forced arbitration only made for strong headlines and did not actually do enough for employees: although Google made forced arbitration optional in cases of sexual harassment for employees (excluding contractors and temps), it didn’t do so for other forms of discrimination.

Google TVCs (temps, vendors, and contractors) also wrote an open letter to Google’s CEO last December demanding equal benefits and treatment, and reiterated the demands of the walkout in the letter. Additionally, two shareholders, James Martin and the pension funds, sued Alphabet’s board members last month for protecting the top execs accused of sexual assault. The lawsuit alleged that Google directors agreed to pay Rubin to ‘ensure his silence’ and suppress information about the misconduct of other executives. Jones further states that Google also filed a motion before the National Labor Relations Board to overturn a ruling that permitted employees to organize on company email and document systems.

Beyond the walkout, Jones also wrote about the Google+ launch, when Google employees, including herself, opposed the Google+ ‘real name’ policy, under which people had to use their legal names on the platform. “In doing so, Google+ would create yet another space inaccessible to some teachers, therapists, LGBT+ people, and others who need to use a different identity for privacy and safety”, mentions Jones. She writes that despite the employees’ opposition, Google+ launched in mid-2011 with a real-name policy.

Moreover, there was also an increase in harassment, doxxing (a harassment method that reveals a person’s personal information on the internet), and hate speech targeted at marginalized employees within Google’s internal communications, which management silently tolerated. When employees attempted to raise concerns about harassment internally (via the official channels), they were either ignored or punished for doing so. Another common issue was an increasing willingness to compromise on ethics for profit. “Google will need to fundamentally change how it is run in order to win back the trust of workers and prevent a catastrophic loss of long-tenured employees, especially those from vulnerable groups”, writes Jones.

Jones is contributing her $100,000 payout from Google to support Google workers (especially contractors and H-1B workers) who may face retaliation for their future organizing, and other workers have pledged a further $150,000. Groups such as Coworker.org and the Tech Workers Coalition are also helping Jones and other employees learn more about their rights. She also mentioned that although she’s no longer a part of Google, she will remain ‘fiercely loyal’ to its employees who have committed themselves to developing ethical products and who continue to advocate for equal and fair treatment of their colleagues. “The labor movement at Google is larger and stronger than ever, and it will continue to advance human rights... regardless of whether management supports them”, writes Jones.

For complete information, check out the official Liz Fong-Jones Medium post.

Read next:
Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
Youtube promises to reduce recommendations of ‘conspiracy theory’. Ex-googler explains why this is a ‘historic victory’
Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

Regulate Google, Facebook, and other online platforms to protect journalism, says a UK report

Bhagyashree R
14 Feb 2019
4 min read
On Tuesday, Frances Cairncross, a British economist, journalist, and academic, shared her work in a report on the present and future of the news media market. This independent report, named The Cairncross Review: A Sustainable Future for Journalism, suggests that online platforms including Google, Facebook, Apple, and Twitter should be regulated in the way they distribute news. Along with giving an insight into the current state of the news media market, it suggests some measures online platforms could take to ensure the financial sustainability of publishers. It also shows how search engines, social media platforms, and digital advertising are impacting the news market.

The report highlights that most of the digital advertising revenue goes to Google and Facebook because of their reach and the personal data they collect about users. This makes it difficult for publishers to compete, and as a result their revenue has dipped. To address this revenue gap between online platforms and publishers, and also to prevent the spread of misinformation, the review suggests that it’s time for the government to step in.

How can the government bridge the revenue gap and prevent the spread of fake news?

The review proposes that online platforms define ‘codes of conduct’ to put a check on the commercial arrangements between publishers and online platforms. To ensure compliance with these codes of conduct, the process should be overseen by a government regulator. The regulator will be someone who has an understanding of both economics and digital technology. They will have the powers to command information and can also set out a compulsory set of minimum requirements for these codes.

In recent years, online platforms have constantly come under public scrutiny because of the spread of fake news and misinformation. That is why these platforms started putting in place measures to help users identify the reliability and trustworthiness of sources. Though the review recommends expanding these efforts, it adds that “This task is too important to leave entirely to the judgment of commercial entities.” This essentially means these measures will also be regulated. Initially, this will be limited to gathering information on the steps online platforms are taking to improve people’s awareness of the origins and quality of the news they read.

In a discussion on Hacker News, some users were for this proposal while others were against it. One user commented, ”So on to the idea that Google and Facebook should be regulated, I think it's an absolutely horrible idea. We are talking of censorship conducted by the government, the worst kind there is. And thinking of our own government, I can't think of people that are more corrupt or incompetent. Just fucking educate people on fact-checking and elementary logic. Push for some lessons in high-school or whatever.” Another user added, “Facebook or Google need to be regulated because they are radicalizing people in search of more engagement to sell more ads. That is not a good situation, and there is no incentive for them to stop doing it. As it is the best way to get more money.”

In a separate report, The Wall Street Journal reported that Apple and major publishers are negotiating over a subscription news service. Apple has suggested a revenue split according to which 50% of a suggested $10/month membership fee will go to Apple; the remaining 50% will be shared among the participating publishers. The WSJ reports that publishers will most likely not agree to this revenue split.

For more details, read the report published by Frances Cairncross.

Read next:
Google announces the general availability of a new API for Google Docs
Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
Youtube promises to reduce recommendations of ‘conspiracy theory’. Ex-googler explains why this is a ‘historic victory’

FAIR releases a new ELF OpenGo bot with a unique archive that can analyze 87k professional Go games

Natasha Mathur
14 Feb 2019
3 min read
It was in May last year that Facebook AI Research (FAIR) released the open source ELF OpenGo bot, an AI bot that has defeated world champion professional Go players, based on its existing ELF platform for Reinforcement Learning Research. Yesterday, FAIR announced new features and research results related to ELF OpenGo, including an updated model, a Windows executable version of the bot, and a unique archive analyzing 87k professional Go games.

ELF OpenGo, an open-source reimplementation of the AlphaZero algorithm, is the first open-source Go AI to have convincingly demonstrated superhuman performance, achieving a 20:0 record against top global professionals. The FAIR team has updated the ELF OpenGo model by retraining it from scratch. The team has also provided a data set of 20 million self-play games and the 1,500 intermediate models used to generate them, which in turn further reduces the need for computing resources.

Putting the new model to the test

The FAIR team used the new model to analyze games played by professional human players. They observed that its ability to predict the players’ moves went down very early in its learning process (after less than 10 percent of the total training time), but as the model continued training, its skill level kept improving. This ultimately led to it beating the team’s earlier prototype ELF OpenGo model 60 percent of the time. “That prototype system was already outperforming human experts, having achieved a 20-0 record against four Go professionals ranked among the top 30 players in the world”, states the team.

However, the exploration of ELF OpenGo's learning process also revealed some important limitations specific to deep RL. For instance, similar to AlphaZero, the new ELF OpenGo model never fully masters the concept of “ladders”, a common technique in which one player traps the other's stones in a long formation. The FAIR team then curated a data set of 100 ladder scenarios and evaluated ELF OpenGo's performance on them.

Analyzing 87k professional Go games

The team has come out with a new interactive tool based on ELF OpenGo's analysis of 87,000 games played by humans. The data set spans 1700 to 2018, and the system evaluates the quality of individual moves based on the agreement between the moves predicted by the bot and the moves the human players actually made. “Though the tool encourages deep dives into specific matches, it also highlights important trends in Go. In analyzing games played over that period of more than 300 years, the bot found average strength of play has improved fairly steadily”, states the team. You can also analyze individual players, such as Honinbo Shusaku (the most famous Go player in history), which in turn shows different trends in comparison with ELF OpenGo, depending on the stage of the gameplay.

“Though ELF OpenGo is already being used by research teams and players around the world, we're excited to expand last year's release into a broader suite of open source resources. By making our tools and analysis fully available, we hope to accelerate the AI community's pursuit of answers to these questions”, states the FAIR team.
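The human-game analysis described above comes down to a single statistic: how often the bot's top predicted move matches the move the professional actually played. Here is a minimal, self-contained sketch of that agreement calculation in Go; the Move type and the sample data are invented for illustration and are not part of the ELF OpenGo API.

```go
package main

import "fmt"

// Move is a simplified board coordinate; real Go engines use richer encodings.
type Move struct{ X, Y int }

// agreementRate returns the fraction of positions where the model's top
// predicted move matches the move the professional actually played.
func agreementRate(predicted, played []Move) float64 {
	if len(predicted) == 0 || len(predicted) != len(played) {
		return 0
	}
	matches := 0
	for i := range predicted {
		if predicted[i] == played[i] {
			matches++
		}
	}
	return float64(matches) / float64(len(predicted))
}

func main() {
	// Toy data standing in for a single professional game record.
	played := []Move{{3, 3}, {15, 15}, {16, 3}, {3, 15}}
	predicted := []Move{{3, 3}, {15, 15}, {16, 4}, {3, 15}}
	fmt.Printf("agreement: %.0f%%\n", 100*agreementRate(predicted, played)) // agreement: 75%
}
```

Tracking how this number changes across eras of games (or across stages of a single game) is what surfaces the historical trends the tool highlights.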
For more information, check out the official ELF OpenGo Bot announcement.

Read next:
Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

Batch: a Special Case of Streaming

Amrata Joshi
14 Feb 2019
4 min read
Last week, the team at Apache announced that Alibaba has decided to contribute its Flink fork, called Blink, back to the Apache Flink project.

A unified approach to Batch and Streaming

Apache Flink has been following the philosophy of taking a unified approach to batch and streaming data processing. The core building block is “continuous processing of unbounded data streams.” With this continuous processing, users can also do offline processing of bounded data sets. Batch is considered a special case of streaming; this view is supported by various projects such as Flink and Beam, and it is known as a powerful way of building data applications that generalize across real-time and offline processing while reducing the complexity of data infrastructures.

However, “batch is just a special case of streaming” does not mean that any stream processor is now the right tool for your batch processing use cases: pure stream processing systems are slow at batch processing workloads, and a stream processor that shuffles through message queues to analyze large amounts of available data is not useful. Unified APIs such as Apache Beam delegate to different runtimes based on whether the data is continuous/unbounded or finite/bounded. For example, the batch and streaming runtime implementations of Google Cloud Dataflow are different, in order to get the desired performance and resilience in each case. Apache Flink has a streaming API that can handle bounded and unbounded use cases, and also offers a separate DataSet API and runtime stack that is faster for batch use cases.
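The core idea, that a batch job is just a stream that happens to end, can be illustrated outside of Flink with a few lines of Go. This is only a conceptual sketch, not Flink or Blink code: the same operator consumes a channel of records, and a bounded (batch) source is simply a stream whose producer eventually closes the channel, while an unbounded source would keep sending forever.

```go
package main

import "fmt"

// sumOperator is a streaming operator: it consumes records until the input
// channel is closed, emitting a running total. It never needs to know whether
// the source is bounded (batch) or unbounded (streaming).
func sumOperator(in <-chan int, out chan<- int) {
	total := 0
	for v := range in {
		total += v
		out <- total
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go sumOperator(in, out)

	// A bounded source: a fixed data set fed as a stream, then closed.
	// An unbounded source would simply keep sending and never close.
	go func() {
		for _, v := range []int{1, 2, 3, 4} {
			in <- v
		}
		close(in)
	}()

	for total := range out {
		fmt.Println("running total:", total)
	}
}
```

Knowing that the input will end is exactly what lets a runtime schedule, buffer, and recover differently for batch workloads, which is what the following improvements are about.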
What can be improved?

To make Flink’s experience on bounded data (batch) state-of-the-art, a few enhancements are required:

A truly unified runtime operator stack: Currently, the bounded and unbounded operators have different network and threading models that don’t mix and match. Continuous streaming operators are the foundation in a unified stack. When operating on bounded data without latency constraints, the API or the query optimizer can select from a larger set of operators.

Exploiting bounded streams to reduce the scope of fault tolerance: When input data is bounded, it is possible to completely buffer data during shuffles and replay that data after a failure. This makes recovery fine-grained and much more efficient.

Exploiting bounded stream operator properties for scheduling: A continuous unbounded streaming application needs all of its operators running at the same time. An application on bounded data can schedule operations depending on how the operators consume data, which increases resource efficiency.

Enabling these special-case optimizations for the DataStream API: Currently, only the Table API activates these optimizations when working on bounded data.

Performance and coverage for SQL: In order to be competitive with the best batch engines, Flink needs more coverage and performance for SQL query execution. As the core data plane in Flink is high performance, the speed of SQL execution depends on optimizer rules, a rich set of operators, and features like code generation.

Merging Blink and Flink

As Blink’s code is currently available as a branch in the Apache Flink repository, merging such a big amount of changes while keeping the merge process as non-disruptive as possible is difficult. The merge plan focuses on the bounded/batch processing features and follows this approach to ensure a smooth integration:

For merging Blink’s SQL/Table API query processor enhancements, the team can work in an easier way as both Flink and Blink have the same APIs: SQL and the Table API. Following some restructuring of the Table/SQL module, the team plans to merge the Blink query planner (optimizer) and runtime (operators) as an additional query processor next to the current SQL runtime. Initially, users will be able to select which query processor to use. After a transition period, the current processor will be deprecated and eventually dropped.

The Flink community is working on refactoring its current scheduler and adding support for pluggable scheduling and fail-over strategies. Once this is done, the team can add Blink’s scheduling and recovery strategies as a new scheduling strategy that will be used by the new query processor. The new scheduling strategy will also be used for bounded DataStream programs.

To know more, check out Apache’s official post.
Read next:
LLVM officially migrating to GitHub from Apache SVN
Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!
Confluent, an Apache Kafka service provider, adopts a new license to fight against cloud service providers

You can now install Windows 10 on a Raspberry Pi 3

Prasad Ramesh
14 Feb 2019
2 min read
The WoA Installer for Raspberry Pi 3 enables installing Windows 10 on the credit-card-sized computer. It is made by the same members who brought Windows 10 ARM to the Lumia 950 and 950 XL.

Where to start?

To get started, you need a Raspberry Pi 3 Model B or B+, a microSD card of at least class 1, and a Windows 10 ARM64 image, which you can get from GitHub. You also need a recent version of Windows 10 and .NET Framework 4.6.1. The WoA Installer is just a tool which helps you deploy Windows 10 on the Raspberry Pi 3; it needs the Core Package in order to run. You can find these listed on the GitHub page.

Specification comparison

Regarding specifications, the minimum requirements for Windows 10 are:
- Processor: 1 gigahertz (GHz) or faster processor or SoC
- RAM: 1 gigabyte (GB) for 32-bit or 2 GB for 64-bit
- Hard disk space: 16 GB for 32-bit OS, 20 GB for 64-bit OS

The Raspberry Pi 3B+ has specifications just good enough to run Windows 10:
- SoC: Broadcom BCM2837B0 quad-core A53 (ARMv8) 64-bit @ 1.4GHz
- RAM: 1GB LPDDR2 SDRAM

While this sounds good, a Hacker News user points out: “Caution: To do this you need to run a rat's nest of a batch file that runs a bunch of different code obtained from the web. If you're going to try this, try on devices you don't care about. Or spend innumerable hours auditing code. Pass -- for now.”

You can check out the GitHub page for more instructions.

Read next:
Raspberry Pi opens its first offline store in England
Introducing Strato Pi: An industrial Raspberry Pi
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25

Experts respond to Trump’s move on signing an executive order to establish the American AI Initiative

Amrata Joshi
14 Feb 2019
4 min read
On Monday, U.S. President Donald Trump signed an executive order laying out a national plan to boost American leadership in Artificial Intelligence (AI) technology by establishing the American AI Initiative. According to most experts, the move seems to be aimed at China's swift rise in AI. Nearly two years ago, the Chinese government released its own sweeping AI plan and has committed tens of billions of dollars in spending toward developing it; Trump's new order looks like the American response. The official announcement framed it as an effort to win an AI arms race.

The announcement states, “Americans have profited tremendously from being the early developers and international leaders in AI. However, as the pace of AI innovation increases around the world, we cannot sit idly by and presume that our leadership is guaranteed.”

The announcement further mentioned five major areas of action:
- Investing in AI Research and Development (R&D) by having federal agencies increase funding for AI R&D
- Making federal data and computing power more available for AI purposes and further unleashing AI resources
- Setting government standards for safe and trustworthy AI
- Building and training an AI workforce
- Engaging with international allies while protecting the technology from foreign adversaries

Trump said in a statement accompanying the order, "Continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States." The executive order did not allocate any additional federal funding towards executing the AI vision; instead, the document calls on federal agencies to prioritize existing funds for AI projects.

Response by the experts

In responses to IEEE Spectrum about the announcement, most of the experts said that it is likely a reaction to China's AI policy, which calls for major investment to make China the world leader in AI by 2030. In an interview, the former head of Google China recently even explained to IEEE Spectrum why China has the edge in AI.

According to Darrell West, director of the Brookings Institution's center for technological innovation and author of the recent book The Future of Work: Robots, AI, and Automation, Trump is trying to put the American AI Initiative at the top and compete in the AI race, but the idea remains unclear with respect to implementation. He said, “Trump is signing an executive order on AI because it is the transformative technology of our time and he needs a national strategy on how to retain U.S. preeminence in this area. Critics complain there is no national strategy, so he is using the executive order to explain how the government can help through R&D support, workforce development, and infrastructure enhancement. The order is a step in the right direction, but it is not clear whether there is new funding to support the initiative or how it will be implemented.”

Daniel Castro, director of the Center for Data Innovation, was more positive about the move and said, “Ensuring American leadership in artificial intelligence is critical for U.S. competitiveness. Accelerating the development and adoption of AI holds the potential to increase productivity, grow the economy, and harness the many societal benefits the technology can bring. The administration’s initiative will prioritize AI research and training programs and boost auxiliary infrastructure such as data and other inputs.”

Amy Webb, a “quantitative futurist” and author of a forthcoming book about AI called The Big Nine: How the Tech Titans & Their Thinking Machines Could Warp Humanity, doesn't envy the legislators. According to her, the American AI Initiative is vague and lacks details. In a statement, she said, “The American AI Initiative at the moment is a collection of bullet points. It is vague at best and makes zero mention of detailed policy, a concrete funding plan, or a longer-term vision for America’s future.”

Lawmakers and major tech companies, meanwhile, are happy about the move, and most of the tech companies now see it as an opportunity to cash in on AI. Intel said in a statement that it makes “perfect sense” for federal agencies to play a “key role in AI implementation.”

To know more about this news, check out the official announcement.

Read next:
The US to invest over $1B in quantum computing, President Trump signs a law
Google slams Trump’s accusations, asserts its search engine algorithms do not favor any political ideology
The U.S. just launched the American AI Initiative to prioritize AI research and development

Tyler Tringas, co-founder of Earnest Capital, goes live on Hacker News to answer comments as EC launches

Sugandha Lahoti
14 Feb 2019
9 min read
Yesterday, Tyler Tringas, co-founder of Earnest Capital, went live on Hacker News to answer questions. For those not aware, Earnest Capital provides funding and mentorship for bootstrappers, indie startups, and hackers, mostly in SaaS, e-commerce, and scalable online education. Earnest Capital has been receiving a lot of attention because its investing structure is different from that of traditional VCs and accelerators: a novel Shared Earnings Agreement investing model.

Key attributes of a Shared Earnings Agreement by Earnest Capital

- We invest upfront capital at the early stage of businesses, typically (but not always) after a product has launched but before the founders go full-time.
- We agree on a Return Cap which is a multiple of the initial investment (typically 3-5x).
- We don’t have any equity or control over the business. No board seats either. You run your business as you see fit.
- As your business grows we calculate what we call “Founder Earnings” and Earnest is paid a percentage. Essentially we get paid when you and your co-founder get paid. Founder Earnings = Net Income + any amount of founders’ salaries over a certain threshold. If you want to eat ramen, pay yourselves a small salary, and reinvest every dollar into growth, we don’t get a penny and that’s okay. We get earnings when you do.
- Unlike traditional equity, our share of earnings is not perpetual. Once we hit the Return Cap, payments to Earnest end.
- In most cases, we’ll agree on a long-term residual stake for Earnest if you ever sell the company or raise more financing. We want to be on your team for the long-term, but don’t want to provide any pressure to “exit.”
- If you decide you want to raise VC or other forms of financing, or you get an amazing offer to sell the company, that’s totally fine. The SEA includes provisions for our investment to convert to equity alongside the new investors or acquirers.

The Hacker News conversation was a big hit, with Tyler mostly answering questions and offering advice to upcoming entrepreneurs while also explaining their novel strategy. Per Tyler, Earnest Capital works like a “profit-share + a SAFE”. The primary function is for them to share in the profit (or more specifically the “founder earnings”) of the business alongside the founder(s). If a business owner later decides to sell the business or raise a big equity round, Earnest converts into a SAFE.

On how it is different from a Venture Capitalist investment

A user asked, “If I read your agreement correctly, your terms are so that you invest $150k in what is or nearly is a bootstrapped business, on what essentially seems like a profit share basis, and expect to get paid for your doing so until you've made $3M?” Tyler’s response: “$3m?! No. We have a Return Cap which is negotiated on a per deal basis but we guide toward 3-5x the initial investment. This post walks through each of the terms in detail.”

The traditional VC model usually works in cycles: the founders raise some money, then they build and sell, raise some more, build and sell. This cycle continues until they have raised enough money to be at least close to a profit. “How does that work with bootstrappers? Ideally, they'd only have to raise money once (from you), but what happens after the $100k (example) run out and the business is only generating, let's say, $2k/month? Back to 9-to-5?” asked another user. Tyler argued that this is true only in some cases. Earnest Capital’s goal is for founders to get to “personal break-even, where they can pay themselves enough to work on the business full-time, by the time our investment runs out. Some percentage of these will fail (startups are hard) and we're expecting that”, he added.

On the comparison with TinySeed

People also appreciated the novel non-VC funding space, asking Tyler to do a quick compare/contrast with TinySeed, a startup accelerator of similar ilk and recency that is also designed for bootstrappers. Tyler commented, “Specific to the funding model, we both do a kind of profit-share, with the main difference being that Earnest's repayment will usually happen earlier (assuming the business is successful) but is capped, Tinyseed payments would be smaller in the earlier years and keep growing over time perpetually. Neither one is "better" and I probably wouldn't advise founders to choose between an offer from both on the basis of just the funding model.”

On the 3-5x return cap

People also had concerns regarding the 3-5x return cap. Some called Earnest Capital “more of a charity than a profit-making enterprise”, to the extent of calling it altruistic. A Hacker News user observed, “For the math to work out, with a 3x cap on earnings, you need 33% of the businesses to be successful just to get your money back. At 5x you still need 20%. And that is over however long it takes for those companies to reach payback, which could be measured in decades for some of the companies.”

To this, Tyler said, “We also have a residual, uncapped, % option if the founder ever sells the business. This keeps us aligned with the founders to keep helping them grow the value of the business for the long-term even after the Return Cap is paid back.” He added that they are prepared to fine-tune the return cap model, building, measuring, learning, and iterating as they go. Basically, he added, “By default, we don't take equity (shares, a board seat, none of that). If you decide to raise a round of equity financing (ie VC) we could convert into equity alongside them and if you sell the company we get a % of that.”

A user countered this, saying, “This return cap is stated as 9.5% in the spreadsheet. But, where did that come from? Is this is a stock option, convertible note, or equity position?” Another user added, “They don't explain it, but it sounds like the whole deal is effectively seed financing where they eventually get a 9.5% stake, but also with a 3-5x loan interest payment once you make money (which they're framing as "Shared Earnings"). And they're trying to hide the 9.5% part.” Tyler offered no comment on this thread.

On being asked why the return cap is better than just taking out a 20-30% APR business loan, Tyler said, “At the risk of not answering the question, I'd say no form of capital is "better" than any other. Capital is a tool and the job of the founder is to find the option (both on payments term and other aspects like mentorship or personal exposure) that best aligns with their goals.”
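For a sense of how the return cap mechanics discussed above could play out, here is a rough, illustrative calculation of a Shared Earnings Agreement in Go. Every number in it (investment size, earnings share, salary threshold, income trajectory) is invented for the example and is not one of Earnest Capital's actual terms.

```go
package main

import "fmt"

func main() {
	// Illustrative terms only; these are not Earnest Capital's actual numbers.
	investment := 150000.0
	returnCap := 3 * investment // 3x cap: payments stop once this total is reached
	sharePct := 0.10            // investor's share of "Founder Earnings"
	salaryThreshold := 70000.0  // founder salary above this counts as earnings

	yearlyNetIncome := []float64{0, 100000, 400000, 1000000, 2000000, 3000000}
	founderSalary := 90000.0

	paidSoFar := 0.0
	for year, netIncome := range yearlyNetIncome {
		// Founder Earnings = net income + founder salary above the threshold.
		founderEarnings := netIncome + (founderSalary - salaryThreshold)
		payment := sharePct * founderEarnings
		if paidSoFar+payment > returnCap {
			payment = returnCap - paidSoFar // never pay past the cap
		}
		paidSoFar += payment
		fmt.Printf("year %d: payment %.0f, paid so far %.0f of %.0f cap\n",
			year+1, payment, paidSoFar, returnCap)
		if paidSoFar >= returnCap {
			fmt.Println("return cap reached; payments to the investor end")
			break
		}
	}
}
```

In this made-up trajectory the payments stay tiny while the business is small and only reach the cap once founder earnings grow large, which is the trade-off the Hacker News debate about the cap revolves around.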
On his thoughts on Jerry Neumann’s theory

Jerry Neumann, a venture capitalist at Neu Venture Capital, has a theory about why, for decades, there were only bank loans and VC and not much in between. Per his theory, there are three categories of companies (determined by the alpha value of the power-law distribution they're in):
- Companies where the risk and the upside potential are small. This is where bank loans are focused.
- Companies where the risk is enormous but the upside potential is “meh”.
- Companies where the risk is enormous but the upside potential is also enormous. This is where VC is focused, and it's why VCs are all about finding those few big hits, because those cover all the losses (or mediocre performance) of the rest.

Neumann appears pretty confident about this hypothesis, not because he can explain the underlying phenomenon, but simply because until now he has not seen much successful funding for companies that is neither VC nor bank loans. A Hacker News user points out that “if his hypothesis is right, then Earnest Capital is targeting companies of type 2: investments with enormous risk (comparable to that of a high-growth startup) but at the same time you're hard-capping your upside at 5x. That seems madness.”

Tyler comments, “I like Jerry's work a lot but come to a different conclusion. My basic thesis is that we're in the deployment age of the internet/web/mobile era and there is a whole new wave of a lot lower risk and a bit fewer reward opportunities for companies to bring the "peace dividend" of the software areas into markets that are not winner-take-all. The upside is these businesses are much more capital efficient, can scale and potentially produce much higher returns than SMBs from previous eras. The downside is they have no collateral and are thus completely unbankable for traditional small business lending. We need a new default form of capital for entrepreneurs and we are trying to build it.”

Overall, people were generally appreciative of Earnest Capital and wished the company success. Here are some of the positive responses:
- “Hi Tyler, this is amazing! Going through the FAQ it seems, you're not investing in India right now! Would love to know if and when you do!”
- “Very cool, Tyler. Just applied. :)”
- “Congratulations on launching! Sounds like a nice compliment to IndieVC and TinySeed. I'm glad to see innovation here, interesting times!”
- “Awesome to see innovation from capital providers on the instrument in the wild. As a niche market founder wish option like Earnest existed when we were raising early financing.”

We recommend going through the entire thread on Hacker News; it makes for a very insightful conversation.
Read next:
A Quick look at ML in algorithmic trading strategies
Why Google kills its own products
Mary Meeker, one of the premier Silicon Valley investors, quits Kleiner Perkins to start her own firm

OpenSSL 3.0 will have significant architecture changes, a FIPS module, and more

Melisha Dsouza
14 Feb 2019
3 min read
On 13th February, the OpenSSL team released a blog post outlining the changes that users can expect in the OpenSSL 3.0 architecture, along with plans for including a new FIPS module.

Architecture changes in OpenSSL 3.0

‘Providers’ will be introduced in this release as a possible replacement for the existing ENGINE interface, to enable more flexibility for implementers. There will be three types of Providers: the “default” Provider will implement all of the most commonly used algorithms available in OpenSSL; the “legacy” Provider will implement legacy cryptographic algorithms; and the “FIPS” Provider will implement FIPS-validated algorithms. Existing engines will have to be recompiled to work normally and will be made available via both the old ENGINE APIs as well as a Provider compatibility layer.

The architecture will include Core Services that form the building blocks usable by applications and providers. Providers in the new architecture will implement cryptographic algorithms and supporting services. A Provider will have implementations of one or more of the following:
- The cryptographic primitives (encrypt/decrypt/sign/hash etc.) for an algorithm
- Serialisation for an algorithm
- Store loader back ends

A Provider may be entirely self-contained or it may use services provided by different providers or the Core Services. The architecture also covers protocol implementations, for instance TLS and DTLS. New EVP APIs will be provided in order to find the implementation of an algorithm in the Core to be used for any given EVP call, and an implementation-agnostic way will be used to pass information between the core library and the providers. Legacy APIs that do not go via the EVP layer will be deprecated. The OpenSSL FIPS Cryptographic Module will be self-contained and implemented as a dynamically loaded provider, and other interfaces may also be transitioned to use the Core over time. A majority of existing well-behaved applications will just need to be recompiled, and no deprecated APIs will be removed in this release.

You can head over to the draft documentation to know more about the features in the upgraded architecture.

FIPS module in OpenSSL 3.0

The updated architecture incorporates the FIPS module into mainline OpenSSL. The module is dynamically loadable, will no longer be a separate download, and its support periods will be aligned. The module is a FIPS 140-2 validated cryptographic module that contains FIPS-validated/approved cryptographic algorithms only. The FIPS module version number will be aligned with the main OpenSSL version number, and new APIs will give applications greater flexibility in the selection of algorithm implementations.

The FIPS Provider will implement a set of services that are FIPS validated and made available to the Core. This includes:
- POST: Power On Self Test
- KAT: Known Answer Tests
- Integrity Check
- Low Level Implementations

[Figure: Conceptual Component View of OpenSSL 3.0]

Read the draft documentation to know more about the FIPS module in the upgraded architecture.

Read next:
Baidu Security Lab’s MesaLink, a cryptographic memory safe library alternative to OpenSSL
OpenSSL 1.1.1 released with support for TLS 1.3, improved side channel security
Transformer-XL: A Google architecture with 80% longer dependency than RNNs

Drafts of Article 13 and the EU Copyright Directive have been finalized

Prasad Ramesh
14 Feb 2019
3 min read
Last week, the EU legislators met to finalize the EU Copyright Directive, and yesterday, in an informal meeting, members of the European Parliament and the Council drafted a final text for the new directive. As explained in a blog post by German politician Julia Reda, this law, which has been under discussion for two years, could stop copyrighted material from being shared over the internet. As of now, the responsibility to control the illegal sharing of copyrighted material is on the copyright holders; the new law could make big internet companies like YouTube, Facebook, etc. responsible for enforcing copyright protection.

The EU Copyright Directive includes Article 11 and Article 13, which are being dubbed the “link tax” and the “meme ban” respectively. Article 13 would require platforms to use expensive and error-prone upload filters to filter the upload of any copyrighted material anywhere in the world. All except small and new sites will also be directly liable for any copyright infringement that may arise if the copyright filters are not good enough, as decided by any court. Article 11, which has already failed in Germany, would make licenses mandatory for using more than very short extracts when reproducing news stories.

Read next: What the EU Copyright Directive means for developers – and what you can do

Reda mentions in her blog post that “The history of this law is a shameful one”. She explains that the purpose of this law was never to solve copyright issues but to serve the special interests of, possibly, a few powerful media organizations. In pursuit of implementing this law, the concerns of academics, researchers, and startups have been ignored, Reda says in her post. If both Parliament and Council vote for it, it will become EU law; the best chance to stop this from happening is the upcoming Parliament vote. Earlier, another parliament member, Axel Voss, had defamed the protest of millions of netizens.

YouTube has also been protesting against this law, which would cause millions of YouTube videos to be blocked in the EU, and many YouTubers have made videos protesting against Article 13. YouTube argues that while the intention of protecting creativity is good, the text drafted by the European Parliament will cause other undesirable effects.

A Hacker News user suggests cutting off Google services in Europe to show what passing this law would mean: “Google should just cut off YouTube and other impacted platforms for 24 hours in the EU with a message telling them that the service is against Article 13. It might sound absurd but it would be effective, and this is the real danger with article 13.” Another user states his displeasure: “This is absolutely horrible and MEPs [members of European Parliament] voting for this do not have EU citizens best interests in mind.” Another user thinks that media companies are behind this: “This is the destruction of the internet. I don't think there's anything other than lobbying from the European media industry behind this directive. At least we can assume decentralized systems will get more attention in the future.”

The Electronic Frontier Foundation also tweeted about the effects of the copyright directive: https://twitter.com/EFF/status/1095775278683512832

You can keep an eye on the voting behavior and also sign the record-breaking petition.

Read next:
JPEG committee wants to apply blockchain to image sharing
EU legislators agree to meet this week to finalize on the Copyright Directive

React Native 0.59 RC0 is now out with React Hooks, and more

Bhagyashree R
14 Feb 2019
2 min read
Last week, Dan Abramov, one of the React developers, shared while announcing the release of React 16.8 that React Native 0.59 would be released with Hooks. The team was waiting for Hooks to land in React's stable version before supporting them in the next release of React Native. Today, they have released React Native 0.59 RC0, which, along with React Hooks, ships with an extracted React Native CLI, major QoL improvements, and more.

Here are some of the major changes this version will come with:
- As mentioned earlier, React Native 0.59 will come with React Hooks. This feature allows you to “hook into” or use React state and other lifecycle features via function components.
- The React Native GitHub repo was quite big, which is why the team decided to move some of the components to separate repos. As a result, the React Native CLI now has a separate repository.
- The team has also removed WebView from the React Native code. Developers now need to use its extracted version.
- Though the team has not shared in the release notes exactly which components will be deprecated, this version will deprecate quite a few of them.
- Along with these changes, many QoL improvements have been made on the native Android side, such as 64-bit support via a new JSC, AppCompatActivity, etc.

As this is not a stable release, developers are advised not to upgrade to React Native 0.59 unless they want to collaborate in testing. To know more in detail, check out React Native's release notes.

Read next:
The React Native team shares their open source roadmap, React Suite hits 3.4.0
React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript

Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend

Melisha Dsouza
14 Feb 2019
2 min read
On 12th February, Red Hat announced its plans to drop MongoDB from its Satellite system management solution. Satellite will now support only a single database: PostgreSQL. The move was made after the development team decided that a relational database with rollback and transactions was necessary for the features needed in Pulp and Satellite.

The team says that PostgreSQL is a better solution in terms of the types of data and usage that Satellite requires, and that a single database backend will also help simplify the overall architecture of Satellite along with supportability, backup, and disaster recovery. Users will not suffer any significant performance impact from the removal of MongoDB, nor will any features of Satellite be affected. The embedded version of MongoDB will continue to be supported in the Satellite versions it has already been released in, and the Satellite team will create a patch for any issue that a user faces. Newer versions of MongoDB that are licensed under the SSPL will not be used by Satellite.

According to Dev Class, the SSPL has not been received well by the open source community. The Server Side Public License was MongoDB's response to cloud service providers taking the community edition of the database and offering it as a service to paying customers: anyone doing so must share the source code underlying the service. Following this, Red Hat also dropped MongoDB from Red Hat Enterprise Linux (RHEL) 8, because, according to Tom Callaway, University Outreach team lead at Red Hat, the SSPL is “intentionally crafted to be aggressively discriminatory towards a specific class of users. To consider the SSPL to be “Free” or “Open Source” causes that shadow to be cast across all other licenses in the FOSS ecosystem, even though none of them carry that risk”.

The team has not released a specific timeline for the change; this announcement was made simply to make users aware of the change that is coming. Users can check the Satellite Blog to know more about this news.

Read next:
4 reasons IBM bought Red Hat for $34 billion
Red Hat announces full support for Clang/LLVM, Go, and Rust
Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers

Mozilla partners with Ubisoft to Clever-Commit its code using an AI-assisted coding assistant

Prasad Ramesh
13 Feb 2019
3 min read
Yesterday, Mozilla announced a partnership with game development company Ubisoft to develop Clever-Commit, an artificial intelligence-based code assistant developed by Ubisoft La Forge. Ubisoft uses the assistant internally, and with this partnership, Firefox will use it to try to find errors in its code. About 8,000 edits are made in every Firefox release by numerous developers, so using the assistant to catch bugs in them can have a large-scale effect on Firefox development.

The assistant combines data from the bug tracking system and the codebase. Clever-Commit analyzes changes in code as developers commit to the Firefox codebase, then looks at previously committed code to draw comparisons and find buggy code. The developer is notified if Clever-Commit thinks that a commit is not proper, which means a bug can be fixed before the commit lands. It can even suggest solutions to the bugs it finds. Firefox uses C++, JavaScript, and Rust, and Mozilla plans to use Clever-Commit for all of them to speed up development.

Clever-Commit is not open source, and there seem to be no immediate plans to make it freely available. But this ability to make inferences from large code bases is not exclusive to Clever-Commit: Microsoft has IntelliCode in Visual Studio, which has examined many GitHub repositories for best coding practices, and IntelliSense can also be used to find bugs in code, similar to Clever-Commit.

Sylvestre Ledru, head of Mozilla's French division, said in a blog post: “With a new release every 6 to 8 weeks, making sure the code we ship is as clean as possible is crucial to the performance people experience with Firefox. The Firefox engineering team will start using Clever-Commit in its code-writing, testing and release process. We will initially use the tool during the code review phase, and if conclusive, at other stages of the code-writing process, in particular during automation. We expect to save hundreds of hours of bug riskiness analysis and detection. Ultimately, the integration of Clever-Commit into the full Firefox developer workflow could help catch up to 3 to 4 out of 5 bugs before they are introduced into the code.”

Clever-Commit was originally shown by Ubisoft as Commit Assistant last year.
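Clever-Commit itself is proprietary and Mozilla has not published how its model works, but the general idea of scoring a new change against the history of changes that later needed bug fixes can be sketched very roughly in Go. The heuristic, file names, and numbers below are invented purely for illustration; they are not how Clever-Commit actually works.

```go
package main

import "fmt"

// bugFixHistory maps a file to how many past commits touching it were later
// linked to bug fixes in the tracker. The data is invented for illustration.
var bugFixHistory = map[string]int{
	"layout/frame.cpp": 12,
	"dom/events.cpp":   7,
	"docs/readme.md":   0,
}

// riskScore is a toy heuristic: the more historically bug-prone files a commit
// touches, the higher its score. A real assistant would learn this relationship
// from commit and bug-tracker data rather than apply a fixed rule.
func riskScore(touchedFiles []string) int {
	score := 0
	for _, f := range touchedFiles {
		score += bugFixHistory[f]
	}
	return score
}

func main() {
	commit := []string{"layout/frame.cpp", "docs/readme.md"}
	if riskScore(commit) > 10 {
		fmt.Println("warning: this change touches historically bug-prone code")
	}
}
```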
Read next:
Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web
Open letter from Mozilla Foundation and other companies to Facebook urging transparency in political ads
The State of Mozilla 2017 report focuses on internet health and user privacy

Go 1.12 Release Candidate 1 is here with improved runtime, assembler, ports and more

Amrata Joshi
13 Feb 2019
3 min read
Yesterday, the Go team released Go 1.12rc1, a release candidate for Go 1.12. This release comes with an improved runtime, updated libraries, new ports, and more.

What’s new in Go 1.12rc1

Trace: In Go 1.12rc1, the trace tool supports plotting mutator utilization curves, including cross-references to the execution trace. These are used to analyze the impact of the garbage collector on application latency and throughput.

Assembler: On arm64, the platform register was renamed from R18 to R18_PLATFORM to prevent accidental use, as the OS could choose to reserve this register.

Runtime: This release improves the performance of sweeping when a large fraction of the heap remains live, which reduces allocation latency following a garbage collection. The Go runtime now also releases memory back to the operating system more aggressively, particularly in response to large allocations that can't reuse existing heap space. In this release, the runtime’s timer and deadline code is faster and scales better with higher numbers of CPUs; this also improves the performance of manipulating network connection deadlines.

Ports: With this release, the race detector is now supported on linux/arm64. Go 1.12rc1 is supported on FreeBSD 10.x.
- Windows: The new windows/arm port supports Go on Windows 10 IoT Core on 32-bit ARM chips such as the Raspberry Pi 3.
- AIX: This release supports AIX 7.2 and later on POWER8 architectures (aix/ppc64), though external linking, pprof, cgo, and the race detector aren't yet supported.
- Darwin: This is the last release to run on macOS 10.10 Yosemite, as Go 1.13 will need macOS 10.11 El Capitan or later. libSystem is now used when making syscalls on Darwin, which ensures forward compatibility with future versions of macOS and iOS. The switch to libSystem has triggered additional App Store checks for private API usage.

Tools: The go tool vet command is no longer supported. With this release, the go vet command has been rewritten to serve as the base for a range of different source code analysis tools. External tools that use go tool vet must be changed to use go vet; using go vet instead of go tool vet will work with all supported versions of Go. The experimental -shadow option is also no longer available with go vet.

Build cache requirement: The build cache is now required, as a step toward eliminating $GOPATH/pkg. With Go 1.12rc1, setting the environment variable GOCACHE=off will cause go commands to fail.

Binary-only packages: This is the last release that will support binary-only packages.

Cgo: This release translates the C type EGLDisplay to the Go type uintptr. Mangled C names are no longer accepted by packages that use Cgo; the Cgo names must be used instead.

Minor changes to the library:
- Bufio: The reader's UnreadRune and UnreadByte methods will now return an error if they are called after Peek.
- Bytes: This release comes with a new function, ReplaceAll, that returns a copy of a byte slice with all non-overlapping instances of a value replaced by another.
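For example, the new bytes.ReplaceAll mentioned above behaves like calling bytes.Replace with a replacement count of -1; a quick sketch:

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	s := []byte("run go tool vet before committing; go tool vet is strict")
	// New in Go 1.12: bytes.ReplaceAll replaces every non-overlapping
	// instance, equivalent to bytes.Replace(s, old, new, -1).
	out := bytes.ReplaceAll(s, []byte("go tool vet"), []byte("go vet"))
	fmt.Println(string(out)) // run go vet before committing; go vet is strict
}
```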
To know more about this news, check out the official post.

Read next:
Introduction to Creational Patterns using Go Programming
Go Programming Control Flow
Essential Tools for Go Programming

Android Things is now more inclined towards smart displays and speakers than general purpose IoT devices

Amrata Joshi
13 Feb 2019
2 min read
The Android Things platform was launched in 2018 to power third-party smart displays and smart speakers. Last year, a number of major manufacturers, including Lenovo, JBL, and LG, released smart displays and speakers powered by Android Things. With the success of those smart displays and speakers over the past year, Google is now refocusing Android Things as a platform for OEM partners to build devices in those categories with the Assistant built in. The update to Android Things was announced yesterday in a blog post by Dave Smith, Developer Advocate for IoT.

Android Things uses the Android Things SDK on top of hardware like the NXP i.MX7D and Raspberry Pi 3B. According to Google's blog post, Android Things remains a platform for experimenting with and building smart connected devices. System images will be available through the Android Things console, where developers can easily create new builds and push app updates for up to 100 devices for non-commercial use. However, support for production System on Modules (SoMs) based on NXP, Qualcomm, and MediaTek hardware won't be available through the public developer platform for now.

This refocus doesn't seem to be in line with Google's original vision for Android Things, which was the Internet of Things: Google's earlier IoT OS, Brillo, was even rebranded to Android Things in late 2016. The focus now seems to be on smart displays and smart speakers.

https://twitter.com/stshank/status/1095434162977165312

Google's official post states that the team will continue to provide the platform for IoT devices, including turnkey hardware solutions. It is also pointing developers interested in turnkey hardware solutions to Cloud IoT Core for secure device connectivity at scale and the upcoming Cloud IoT Edge runtime for managed edge computing services.

Google hasn't stated any reasons for the shift of Android Things from general-purpose IoT devices to smart displays and speakers, but rising competition could be one of them. According to a few users, this is bad news, as Google keeps killing its good projects.

https://twitter.com/aliumujib/status/1095416461345124352
https://twitter.com/gregoriopalama/status/1095591673445433344

Read next:
Google announces the general availability of a new API for Google Docs
Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
Youtube promises to reduce recommendations of ‘conspiracy theory’. Ex-googler explains why this is a ‘historic victory’

California governor wants big tech companies to share “digital dividends” with consumers in the state

Prasad Ramesh
13 Feb 2019
2 min read
This Tuesday, in his ‘State of the State’ speech, California Governor Gavin Newsom proposed a “digital dividend”, according to a Bloomberg report. The proposal would allow California consumers to share in the profits of multi-million dollar tech giants like Facebook and Alphabet Inc’s Google.

In his speech, as reported by Bloomberg, Newsom said: “California’s consumers should also be able to share in the wealth that is created from their data. And so I’ve asked my team to develop a proposal for a new data dividend for Californians, because we recognize that data has value and it belongs to you.”

He mentions that California is proud to be home to tech giants (Silicon Valley), but also points out that these companies make billions of dollars from using personal user data. He did not mention exactly what the dividend could be. Newsom took office only about a month ago; he said that he has asked his staff to come up with a plan but did not disclose any details. His idea is that internet advertising giants like Facebook and Google should pay consumers for using their personal data.

While this might seem like a great idea, big tech companies won't really be ready to pay consumers. They have repeatedly said through various hearings that they are able to provide their services for free because they use user data for advertising. It should also be noted that this proposal is only for California residents; worldwide adoption is not even in question.

Some comments on Reddit seem skeptical about the idea:
- “And then those companies start charging for their service offsetting any ‘dividends’. Nothing is free folks.”
- “We will give you a $10/mo credit if you let us share your data. Oh by the way, your bill is going up $15/mo. We need to offset the costs of issuing credits.”

Read next:
EFF asks California Supreme Court to hear a case on government data accessibility and anonymization under CPRA
California passes the U.S.’ first IoT security bill
Microsoft and Cisco propose ideas for a Biometric privacy law after the state of Illinois passed one