
How-To Tutorials

7007 Articles
Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector

Richard Gall
18 Jun 2019
6 min read
Facebook today announced that it's going to launch its own cryptocurrency, Libra, in what looks like an unprecedented move into the financial sector. At first glance Libra, due to go live in 2020, might seem like a chance for Facebook to test the waters in a cryptocurrency market that's yet to see widespread adoption. But with the parallel launch of Calibra, a payment platform that sits on top of the cryptocurrency, it appears that Facebook plans to develop an entirely new ecosystem for digital transactions that is far bigger than anything that has come before.

While media attention right now is focused on Libra, it's important to understand that what we're seeing from Facebook is an attempt to tackle a number of huge challenges. On the one hand, both Libra and Calibra could help to unify a fairly fragmented digital payment market, but they also open up a whole new market of people who, lacking a bank account, might not even currently be part of the digital economy.

The Libra Association as Facebook's foundation for the future of the digital economy

To get to this point Facebook has had to collaborate with some of the biggest organizations in the world to construct a foundational infrastructure for its plans. The Libra Association is a nonprofit organization made up of companies including PayPal, loan platform Kiva, Uber, and Lyft, as well as many others - all of which have essentially bought into Facebook's vision. By grouping together, the Libra Association is perhaps the most serious attempt by a set of established companies and non-profits to make a cryptocurrency a success. The fact that Facebook has taken steps to form an institution around Libra underlines how important the project is to the company. It's also a tacit admission from Facebook that to build a truly successful ecosystem it must work with other organizations. It can't simply go it alone.

Is Libra just like any other cryptocurrency?

Libra isn't like other cryptocurrencies such as Bitcoin or Ethereum. It should, in theory at least, avoid the notorious volatility of cryptocurrencies by being "tied to a mix of global assets", as the Guardian explains. Libra isn't, then, strictly a decentralized currency. As the Reuters report highlights, "the Libra blockchain will be permissioned, meaning that only entities authorized by the governing association will be able to run the computers."

It's easy to gloss over this (Reuters has), but as the Financial Times' Alphaville publication points out, it's actually open to question to what extent Libra is built on blockchain at all. Writing on Alphaville, journalist Jemima Kelly identifies sections of the Libra white paper that are confusing and contradictory: "...Unlike previous blockchains, which view the blockchain as a collection of blocks of transactions, the Libra Blockchain is a single data structure that records the history of transactions and states over time." In essence then, maybe not quite a blockchain...

There's probably some degree of smoke and mirrors at play. But given the challenging recent history of the cryptocurrency market, arguably this is the right move by Facebook and the Libra Association: keep some distance between Libra and 'standard' cryptocurrencies, while flirting with the hype that surrounds blockchain.

Read next: Containers and Python are in demand, but Blockchain is all hype, says Skill Up developer survey

What is Calibra?

Calibra is Facebook's payment platform that sits on top of Libra. The plan is for Calibra to become a cryptocurrency wallet that integrates with Facebook products. Initially it will be accessible as a button in Messenger and WhatsApp, but it could eventually help power transactions on Instagram - something that has long held promise but has never really been implemented in a way amenable to users. Calibra, then, will likely become the primary way that users experience Libra, especially on services like WhatsApp and Messenger. As Kevin Weil, Calibra's VP of Product, said in an interview with The Verge, "there's a lot of overlap between the things you want out of a wallet for currency and the things you want out of a messaging app." As well as being accessible through Facebook products, Calibra will also be available as a standalone app that users can download.

Can Calibra make digital transactions accessible to those without bank accounts?

One of the potential benefits of Calibra is that it can be used by anyone with a smartphone. Most payment services, like PayPal, require a bank account - Calibra will make Libra accessible to millions of people who might typically only have access to cash (according to the Verge report, almost 1.7 billion people don't have a bank account). This brings a great swathe of the global population into the digital economy. From Facebook's perspective, enabling that becomes very valuable strategically. Indeed, it could elevate the organization to Amazon or Google levels of economic power. That's a long way off, but Calibra at least provides the company with a foundation.

Are Libra and Calibra really such a good idea for Facebook?

Facebook has faced significant scrutiny over the last few years, which means Libra and Calibra will both be entering the world at a tumultuous time for the company. Perhaps that's not so surprising - given all the challenges Facebook faces, this is a huge strategic initiative that could help the company reposition itself as a core part of the digital economy's infrastructure, rather than simply a tool for misinformation and hate speech. Think of it this way: although both Google and Amazon are facing a number of issues ranging from privacy to working conditions, both companies largely appear to be withstanding difficult times.

Whether Libra and Calibra actually work out for Facebook - and whether users actually use them - is another matter completely. There's undoubtedly demand for a more seamless payment service that goes one step further than the likes of PayPal, one that guarantees both privacy and minimal friction. But for Facebook to see itself as the company to solve that enormous challenge takes considerable confidence and a big dash of hubris.

Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests

Savia Lobo
14 Jun 2019
4 min read
Telegram’s founder Pavel Durov shared his suspicion that the recent massive DDoS attack on his messaging service was carried out by the Chinese government. He also noted that the attack coincides with the ongoing Hong Kong protests, where protesters have been using Telegram to coordinate with each other while avoiding detection, as Telegram can function both online and offline.

https://twitter.com/durov/status/1138942773430804480

On Jun 12, a tweet from Telegram Messenger informed users that the messaging service was “experiencing a powerful DDoS attack”. It further said that this attack was flooding its servers with “garbage requests”, thus disrupting legitimate communications. Telegram allows people to send encrypted messages, documents, videos and pictures free of charge. Users can create groups for up to 200,000 people or channels for broadcasting to unlimited audiences. Its growing popularity is largely due to its emphasis on encryption, which prevents many widely used methods of reading confidential communications.

Hong Kong protests: A movement opposing the ‘extradition law’

On Sunday, around 1 million people demonstrated in the semi-autonomous Chinese city-state against amendments to an extradition law that would allow a person arrested in Hong Kong to face trial elsewhere, including in mainland China. “Critics fear the law could be used to cement Beijing’s authority over the semi-autonomous city-state, where citizens tend to have a higher level of civil liberties than in mainland China”, The Verge reports. According to The New York Times, “Hong Kong, a semi-autonomous Chinese territory, enjoys greater freedoms than mainland China under a "one country, two systems" framework put in place when the former British colony was returned to China in 1997. Hong Kong residents can freely surf the Internet and participate in public protests, unlike in the mainland.”

To avoid surveillance and potential future prosecutions, protesters disabled location tracking on their phones, bought train tickets using cash and refrained from having conversations on social media. Many masked their faces to avoid facial recognition and avoided using public transit cards, fearing these could be linked to their identities, instead opting for paper tickets. According to France24, “Many of those on the streets are predominantly young and have grown up in a digital world, but they are all too aware of the dangers of surveillance and leaving online footprints.” Ben, a masked office worker at the protests, said he feared the extradition law would have a devastating impact on freedoms. "Even if we're not doing anything drastic -- as simple as saying something online about China -- because of such surveillance they might catch us," the 25-year-old told France24.

The South China Morning Post first reported on the role the messaging app played in the protests when a Telegram group administrator was arrested for conspiracy to commit public nuisance. The allegation against the administrator, who managed a conversation involving 30,000 members, is that he plotted with others to charge the Legislative Council Complex and block neighbouring roads, SCMP reports. Bloomberg reported that protesters “relied on encrypted services to avoid detection. Telegram and Firechat -- a peer-to-peer messaging service that works with or without internet access -- are among the top trending apps in Hong Kong’s Apple store”.

“Hong Kong’s Legislative Council suspended a review of the bill for a second day on Thursday amid the continued threat of protests. The city’s leader, Chief Executive Carrie Lam, is seeking to pass the legislation by the end of the current legislative session in July”, Bloomberg reports.

Telegram also noted that the DDoS attack appears to have stabilized, and assured users that their data is safe.

https://twitter.com/telegram/status/1138781915560009735
https://twitter.com/telegram/status/1138777137102675969

Telegram explained the DDoS attack in an interesting way: A DDoS is a “Distributed Denial of Service attack”: your servers get GADZILLIONS of garbage requests which stop them from processing legitimate requests. Imagine that an army of lemmings just jumped the queue at McDonald’s in front of you – and each is ordering a whopper. The server is busy telling the whopper lemmings they came to the wrong place – but there are so many of them that the server can’t even see you to try and take your order.

NSA warns users of BlueKeep vulnerability; urges them to update their Windows systems
Over 19 years of ANU (Australian National University) students’ and staff data breached
All Docker versions are now vulnerable to a symlink race attack

Mark Zuckerberg is a liar, fraudster, unfit to be the C.E.O. of Facebook, alleges Aaron Greenspan to the UK Parliamentary Committee

Vincy Davis
14 Jun 2019
10 min read
Last week, the Digital, Culture, Media and Sport Sub-Committee held its hearing on Disinformation with Aaron Greenspan as the witness. Aaron is the founder, president, and CEO of Think Computer Corporation, an IT consulting service, and the author of the book ‘Authoritas: One Student's Harvard Admissions and the Founding of the Facebook Era’. Aaron, who claims to have had the original idea for Facebook, has been a long-standing critic of the social network. In January this year, Aaron published a 75-page report in which he states that fake accounts made up more than half of Facebook's 2.2 billion users. He has now testified to the same in front of the UK Parliamentary Committee, and he minced no words when he said Mark Zuckerberg is a liar, fraudster and unfit to be the CEO of Facebook.

Fake accounts

Facebook differentiates between duplicate accounts and fake accounts, but Aaron considers “any account which is not in your name and you have a second account for whatever purpose, that account is ultimately a fake account”. He noticed the issue of fake accounts on Facebook last year, after which he did his own enquiry and found some “alarming” numbers. At the end of 2017, Facebook claimed that “only 1% of their accounts are fake”. By their own definition, two weeks ago they said that the number of “fake accounts have increased to 5%”. This means that in the span of two years, their estimate has increased fivefold.

The second issue Aaron highlights is that it's “extremely unclear” how they have arrived at that estimate. Two years ago, Facebook came up with a ‘Transparency portal’ due to public pressure. The transparency portal aims at publishing “regular reports to give our community visibility into how we enforce policies, respond to data requests and protect intellectual property, while monitoring dynamics that limit access to Facebook products.” Aaron states that he found it “very difficult to reconcile the SEC filings of Facebook with the numbers from the transparency portal”. He says that the number of fake accounts published on the transparency portal does not match the SEC filings. This is the reason why he decided to do his own report to find out whether the numbers from the “two sources of Facebook aligned”, and he has come to the conclusion that they “don’t”.

Instead, Aaron has arrived at a conditional conclusion that “around 30% of accounts on Facebook are fake”. He claims that Facebook always minimizes its numbers to make the problem look smaller than it is. He also adds, “Based on my total use of the platform, the historical trend starting in 2006, when it was made public, up until the present, it seemed like it was safe that 50% are fake and I think this could actually be higher.” Some weeks ago, Facebook “finally updated their transparency portal and announced that in the fourth quarter of 2018, that they have disabled 1.2 billion fake accounts. And in the first quarter of 2019, the numbers counted to 2.2 billion fake accounts, which is an exponential growth curve in fake accounts, according to their own numbers, which has not been audited by any respective body”. If all the numbers are added, “by a conservative guess, it can be said that there are 10 billion fake accounts, for a platform which has 2.2 billion active users”.

While Facebook says that fake accounts don't matter very much and that, within its undisclosed methodology, it is doing a good job, Aaron believes that transparency around fake accounts is the number one problem for Facebook to address.

FB is more ‘unwilling’ than ‘unable’ to tackle fake accounts

When asked if he thinks that “Facebook wants to tackle these problems of fake accounts and others”, Aaron answered “No”. He believes very strongly that Mark has no clear intention of complying with the law, as he is not appearing in front of the parliament in the UK, in Canada, or anywhere else where serious questions will be aimed at him. This is because “he has no genuine answers in many cases. So I have no faith in Facebook, I don't think it can be trusted and don't think it should be trusted and I would suggest independent analysis.”

The illusion of behavioural advertising effectiveness

Aaron claims that “behavioural advertising produces 4% benefit of revenue to advertisers”. So while platforms like Google or Facebook “might charge 59% more on average for targeted ads, or up to 4900% more” in some cases, they keep getting customers because advertisers believe they benefit from them. He also described Facebook as a "black box", and claimed advertisers were "in the dark" about how effective their campaigns actually are on Facebook and whether they are actually reaching real users.

Aaron adds, “I have come to the conclusion that Facebook is not in anybody’s control. The company has lost its capability to control its own platform. And I don’t think they can truly regain that ability.” He thus likened the social network to the Chernobyl disaster, the catastrophic nuclear accident of 1986. In that situation, there was a technology which was hyped and was “expected to be transformative and make some other problems go away”. Aaron says that in 2004, Mark had described the Facebook system as “something that would involve the problem of reaching critical mass, a nuclear power reference”. Aaron believes that by designing Facebook the way it is now, “Mark has effectively removed the control from the reactor core”, and the result is an “enormous uninhabitable zone in the internet, which is polluted with disinformation and falsity, much like radiation, that is nearly impossible to reverse.”

History of Facebook

Aaron, who has always claimed that he is the original creator of Facebook, says that “My need to create ‘Universal Facebook’, was based on Harvard’s structure as an organization, while Mark wanted to build something cool.” He also states that “certainly neither of us had thought of this to be a global encompassing system”. Aaron says that if he had known that Mark was planning “something like a huge global system”, he would have made it clear to Mark that “this could end up being a privacy nightmare”. He adds that “In the early days, Facebook lost control of the platform and it will never get it back”, and “unfortunately, the Media has been a significant member in propelling Facebook since the last 10-12 years.”

Facebook is not growing

Aaron believes that Mark has been lying to investors and the global community about Facebook’s growth. He says that, as an outside observer, every indicator suggests that the usage of Facebook is falling drastically as users have concerns about their privacy, and yet it is being publicised that Facebook is growing. In reality, Facebook is growing in countries like India, the Philippines, Vietnam, and Indonesia, which are the same countries where Facebook claims in its own disclaimer that “we have more fake accounts coming from these countries than anyone else.” This means that Facebook is growing in countries where there is a known problem of fake accounts, which is more problematic than in the rest of the world. Aaron alleges that this misrepresentation of facts to shareholders amounts to “fraud”.

On respecting privacy and personal data

Aaron states that Facebook used to lie initially that “Openness is a universal good”, while the new lie Facebook tells is that “Encryption is the same as privacy,” which is not true, “and that privacy is the universal good.” So Aaron believes that Mark is going completely against his earlier beliefs, as the “previous model was not working for him anymore so he made a new model, and this is going to increase” further at every next stage of Facebook. Aaron also adds that “Encryption comes with a lot of pitfalls.”

On the question of whether Zuckerberg respects personal data, Aaron claimed that Mark does not believe in the concept of personal data, as he has committed securities fraud on a number of occasions in an incredibly blatant manner. He states that “the SEC has done nothing about it because they are afraid of targeting a billionaire”. He also pointed out that Mark is not the only executive who lies to stockholders, and claimed that other tech giants get away with this too. For example, “Elon Musk does it.”

When asked if there were any warning signs of the Cambridge Analytica scandal prior to 2016, Aaron said that “This wasn’t so much a breach as it was a designed behaviour, and that design was made so on Mark’s orders”. Aaron also recalled that in 2007, when he was working with Ed Baker, now a colleague of Mark Zuckerberg, Baker was planning to break the law with a technique allegedly designed to steal customer data. He added, “When I worked with Ed, he suggested for the software that we were building that we should ask users for access to their address book. And regardless of whether the answer was yes or no, we should take that data anyway, and use that to send emails to other potential users”. Aaron claims that “At that point, I quit”. Eventually, Ed joined Facebook and is now part of its growth team.

Antitrust law debate

If an antitrust action is taken against Facebook, it would result in WhatsApp and Instagram being separated from the mother platform. However, Mark would still be responsible for the data behind Facebook's 12 billion accounts, which, according to Aaron, is a huge problem. This is the reason he believes that “antitrust action against Facebook, is not going to be effective in the long run”. He adds that other major measures need to be undertaken to make Facebook behave responsibly.

How do we solve a problem like Facebook? Aaron has some ideas

Aaron believes that regulating an entity as large and complex as Facebook requires technical knowledge, and “by and large US regulators lack the technical knowledge, to effectively enforce the laws that are already on the book.” Aaron proposes a number of solutions for regulating Facebook, of which the most important step, he believes, is “to remove Mark from the C.E.O. position.” He considers Mark incapable of being a responsible CEO for Facebook.

Aaron’s recommendation comes at a time when, this week, around 68% of independent investors wanted the company to have an independent chairman. Despite the revolt, the proposal was not passed, as Mark owns up to 75% of Class B stock, which gives him almost 60% of the voting power at Facebook. Unsurprisingly, he and his colleagues voted down the independent chairman proposal very smoothly.

Next, Aaron suggested regulating Facebook along the lines of a government-regulated bank. He proposes KYC requirements for all social media at this point. He also adds, “I think anonymous speech should not be banned, as it plays an important role, but if there’s a problem in any case, then the anonymous person should be held to account.” He says that as “Transparency around fake accounts, is the number one problem faced by Facebook”, a ‘Social Media Tax’ should be levied on all its users. He believes that it will provide some revenue to fund investigative journalism, and, because payment is involved, it will require some kind of authentication. This can play a major role in the identification of fake accounts. He says that this will make “the entire process manageable for governments.”

Check out the full hearing on the Parliament TV website.

US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Facebook argues it didn’t violate users’ privacy rights and thinks there’s no expectation of privacy because there is no privacy on social media
Experts present most pressing issues facing global lawmakers on citizens’ privacy, democracy and rights to freedom of speech

4 ways to prepare for negotiating your first offer as a developer

Guest Contributor
14 Jun 2019
7 min read
The future job outlook for developers is promising, at 24% projected growth between 2016 and 2026. Developers are becoming more and more in demand and their salaries worldwide are increasing, seeing a 2018 US median of $50 per hour. If you're a recent grad looking for work, negotiation classes can prime you to prepare for agreeing your first offer with these four tips. Understand your skill value What developer skills do you possess that are relevant to the post you're applying for? Your skills are a mix of your developer training (technical skills) and your people skills (soft skills). Base your negotiation on the measurable professional value you can provide to your organization. For instance, if you have some experience in sales, then prepare to demonstrate how this can add value to the business by better understanding your stakeholders’ needs, and perhaps promote what your department can deliver in non-technical language. If you’ve previously attended contract negotiation classes, you may offer additional value to the developer team at your new company. Understand what skills, qualities, and attitudes your employer values and position yourself accordingly. Research and prepare It’s important to thoroughly research what you can expect from your industry, company, and role. Having the knowledge that comes with extensive research allows you to be prepared when exploring your options and when deciding to agree on your first offer. Industry standards What's the median salary for entry-level developers in your area? What kind of perks can you expect? Familiarize yourself with the industry standards before you walk into your interview. Some great places to begin your research include: payscale.com indeed.com glassdoor.com Salary.com Company culture Another aspect you should consider is the company culture. To determine if a company’s culture is the right fit for you, consider: What working environments do I work best in? Do the company’s values align with my own? How are the interpersonal interactions between staff and management? What does my career trajectory look like, and can I achieve this with the company? Perks and benefits Research the perks that developers in your area of expertise and with your level of skill may expect. For example: Do developers get ongoing training? Are other employers offering equity in the company? This is highly relevant to tech startups. How many paid sick days and vacation days are you eligible for? Is there usually a probationary period for entry-level developers in your area? If so, you should be aware of the length of probation and perhaps negotiate the criteria against which your performance will be judged. What kind of medical and dental insurance can you expect? Do you get better perks and pay if you're a member of a professional association? Can you work from home? Are phone and computer included? Set your expectations right With your well-researched knowledge of industry standards, set your salary expectations ahead of the salary negotiation. Prior research can help you have a number in mind and a relevant contract negotiations course can help you use that number as your negotiation anchor. Your negotiation anchor acts as your reference point whenever you have salary discussions. If the offer is too low, you can decide to walk away and keep looking for better opportunities. If the offer is close to your anchor, use your negotiation training to improve your rate. 
If the company makes an offer that’s much higher than your salary expectations, give pause to understand the reason. Might you have undervalued yourself? Is the company expecting longer hours or more deliverables than its competitors? Does the company offer far fewer benefits, so that everything is built into their higher rate? While some recruiters may ask about your salary expectations during the initial screening process, some may hold off salary discussions until you've met face to face. Additionally, some recruiters will ask directly what you expect salary-wise, or they might ask you to respond to a salary range offer. Whichever method your recruiter or interviewer chooses, it’s best to either not give any number or wait until the interview. Alternatively, if you prefer to share a number, ensure you don’t allow your current or previous position to limit your salary aspirations. Do not share your current or previous salary unless this bolsters your aspirations. When you have a set salary expectation, it shows that: You know your worth. You are clear about the value you bring. You are confident. You are aware of the company’s pay scales for developers at entry level, mid-level, and high proficiency level. Know your strengths to match the required job role Expert negotiation classes equip trained developers to get to know the threats and opportunities their employers may be facing. You can leverage the information gathered to work out the best value exchange that can result in a win-win contract between you and your employer. Try and find out: Why does the company need a new developer? If the underlying need isn’t shared with you early on in your process, then ask. What is the company's budget? While you may not be able to get this number out of your boss, it’s worth asking. You will be surprised how often a company will divulge their ceiling. What is your prospective boss’ strategic goals for the year? Bosses love a team who do their best to make their boss look good. Where are the developer pain points faced by the company? Virtually every role is filled to address a problem or pain point. What are the competitors’ goals for the next year? This places you head and shoulders ahead of other developers who don’t think expansively enough. How much time and how many resources have the prospective employer invested in recruiting and retaining a developer of your skill level? If they have invested a lot of resources and time, then their motivation will likely be high to choose, and so they should be more prepared to be flexible in negotiating your remuneration package. It may be more difficult getting a high salary from a cash-strapped company. However, there's still room for negotiation if your developer skill set is going to result in an increase in the company's revenue or significantly reduce expenditure or risk. For instance, if you develop software that automates customer acquisition and increases marketing Return On Investment (ROI), then you can justify asking for a higher salary or other benefits.   Final thoughts Most developers find salary negotiations to be uncomfortable and awkward. Especially after getting a job offer after a protracted search in a competitive field. As a developer, don't let fear and uncertainty deter you from negotiating for the salary you deserve. You’re more likely to achieve a sizable increase in salary and benefits at your first negotiation than subsequent reviews once in the role. Research industry standards to set justifiable expectations. 
Know your worth and strategically use leverage to create a win-win relationship between you and your prospective employer. Author Bio: James Tighe is a long-time content creator and editor. Through his writings, James brings the best and most important lessons from negotiation classes in NYC to a business audience. He also enjoys the opportunity to work with skilled negotiators, integrating best practices into his own life.   Containers and Python are in demand, but Blockchain is all hype, says Skill Up developer survey Does it make sense to talk about DevOps engineers or DevOps tools? Polyglot programming allows developers to choose the right language to solve tough engineering problems

Austrian Supreme Court rejects Facebook’s bid to stop a GDPR-violation lawsuit against it by privacy activist, Max Schrems

Bhagyashree R
13 Jun 2019
5 min read
On Tuesday, the Austrian Supreme Court overturned Facebook’s appeal to block a lawsuit against it for not conforming to Europe’s General Data Protection Regulation (GDPR). The decision will also have an effect on other EU member states that give “special status to industry sectors.”

https://twitter.com/maxschrems/status/1138703007594496000?s=19

The lawsuit was filed by Austrian lawyer and data privacy activist Max Schrems. In the lawsuit, he accuses Facebook of using illegal privacy policies, as it forces users to give their consent for processing their data in return for using the service. GDPR does not allow forced consent as a valid legal basis for processing user data. Schrems said in a statement, “Facebook has even blocked accounts of users who have not given consent. In the end users only had the choice to delete the account or hit the ‘agree’ button–that’s not a free choice; it more reminds of a North Korean election process. Many users do not know yet that this annoying way of pushing people to consent is actually forbidden under GDPR in most cases.”

Facebook has been trying to block this lawsuit by questioning whether GDPR-based cases fall under the jurisdiction of courts at all. According to Facebook’s appeal, such lawsuits should be handled by data protection authorities, in this case the Irish Data Protection Commissioner (DPC). Dismissing Facebook’s argument, this landmark decision says that any complaints made under Article 79 of GDPR can be reviewed both by judges and by data protection authorities.

The verdict comes as a sigh of relief for Schrems, who had to wait almost five years even to get this lawsuit to trial because of Facebook's continual attempts to block it. “I am very pleased that we were able to clarify this fundamental issue. We are hoping for a speedy procedure now that the case has been pending for a good 5 years," Schrems said in a press release. He further added, “If we win even part of the case, Facebook would have to adapt its business model considerably. We are very confident that we will succeed on the substance too now. Of course, they wanted to prevent such a case by all means and blocked it for five years.“

Previously, the Vienna Regional Court had ruled in Facebook’s favor, declaring that it did not have jurisdiction and that Facebook could only be sued in Ireland, where its European headquarters are. Schrems believes that this verdict was given because there is “a tendency that civil judges are not keen to have (complex) GDPR cases on their table.” Now, both the Appellate Court and the Austrian Supreme Court have agreed that everyone can file a lawsuit for GDPR violations. Schrems’ original idea was to bring a “class action” style suit against Facebook by allowing any Facebook user to join the case. The court did not allow that, however, and Schrems was limited to bringing only a model case to the court.

This is Schrems’ second victory this year in the fight against Facebook. Last month, the Irish Supreme Court dismissed Facebook’s attempt to stop the referral of a privacy case regarding the transfer of EU citizens’ data to the United States. The hearing of that case is now scheduled to happen at the European Court of Justice (ECJ) in July.

Schrems’ eight-year-long battle against Facebook

Schrems’ fight against Facebook started well before most of us realized the severity of tech companies harvesting our personal data. Back in 2011, Schrems’ professor at Santa Clara University invited Facebook’s privacy lawyer Ed Palmieri to speak to his class. Schrems was surprised by the lawyer's lack of awareness of data protection laws in Europe. He then decided to write his thesis paper about Facebook’s misunderstanding of EU privacy laws. As part of the research, he requested his personal data from Facebook and found it had his entire user history. He went on to make 22 complaints to the Irish Data Protection Commission, in which he accused Facebook of breaking European data protection laws. His efforts finally showed results when, in 2015, the European Court of Justice struck down the EU–US Safe Harbor Principles.

As part of his fight for global privacy rights, Schrems also co-founded the European non-profit noyb (None of Your Business), which aims to “make privacy real”. The organization works to make privacy enforcement more effective, holds companies that fail to follow Europe's privacy laws accountable, and runs media initiatives to support GDPR.

It looks like things haven’t been going well for Facebook. Along with losing these cases in the EU, in a revelation yesterday by the WSJ, several emails were found that indicate Mark Zuckerberg’s knowledge of potentially problematic privacy practices at the company. You can read the entire press release on NOYB’s official website.

Facebook releases Pythia, a deep learning framework for vision and language multimodal research
Zuckerberg just became the target of the world’s first high profile white hat deepfake op. Can Facebook come out unscathed?
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny

Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Vincy Davis
13 Jun 2019
9 min read
Java 11 includes a lot of improvements and changes in the GC (Garbage Collection) domain. Z Garbage Collector (ZGC) is a scalable, low-latency GC, written completely from scratch. It can work with heaps ranging from relatively small sizes up to multiple terabytes of memory. As a concurrent garbage collector, ZGC promises that its pauses will not add more than 10 milliseconds to application latency, even for large heap sizes. It is also easy to tune. It was released with Java 11 as an experimental GC. Work on this GC continues in OpenJDK, and more changes can be expected over time.

This article is an excerpt taken from the book, Java 11 and 12 - New Features, written by Mala Gupta. In this book, you will learn the latest developments in Java, right from variable type inference and simplified multithreading through to performance improvements, and much more. In this article, you will understand the need for ZGC, its features, how it works, the ZGC heap, ZGC phases, and colored pointers.

Need for Z Garbage Collector

One of the features that resulted in the rise of Java in the early days was its automatic memory management with its GCs, which freed developers from manual memory management and lowered memory leaks. However, with unpredictable timings and durations, garbage collection can (at times) do more harm to an application than good. Increased latency directly affects the throughput and performance of an application. With ever-decreasing hardware costs and programs engineered to use large amounts of memory, applications are demanding lower latency and higher throughput from garbage collectors. ZGC promises a latency of no more than 10 milliseconds, which doesn't increase with heap size or the size of the live set. This is because its stop-the-world pauses are limited to root scanning.

Features of Z Garbage Collector

ZGC brings in a lot of features, which have been instrumental in its proposal, design, and implementation. One of the most outstanding features of ZGC is that it is a concurrent GC. Other features include:

It can mark memory and copy and relocate it, all concurrently. It also has a concurrent reference processor.
As opposed to the store barriers used by other HotSpot GCs, ZGC uses load barriers. The load barriers are used to keep track of heap usage.
One of the intriguing features of ZGC is the use of load barriers with colored pointers. This is what enables ZGC to perform operations concurrently while Java threads are running, such as object relocation or relocation set selection.
ZGC is more flexible in configuring its size and scheme. Compared to G1, ZGC has better ways to deal with very large object allocations.
ZGC is a single-generation GC. It also supports partial compaction.
ZGC is also highly performant when it comes to reclaiming memory and reallocating it.
ZGC is NUMA-aware, which essentially means that it has a NUMA-aware memory allocator.

Getting started with Z Garbage Collector

Working with ZGC involves multiple steps: install the JDK binary (which is specific to Linux/x64), build it, and start it. The following commands can be used to download the sources and build them on your system:

$ hg clone http://hg.openjdk.java.net/jdk/jdk
$ cd jdk
$ sh configure --with-jvm-features=zgc
$ make images

After execution of the preceding commands, the JDK root directory can be found in the following location:

./build/linux-x86_64-normal-server-release/images/jdk

Java tools, such as java, javac, and others can be found in the /bin subdirectory of the preceding path (its usual location).
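A quick sanity check (a suggested step, not part of the original excerpt) is to ask the freshly built binary for its version and confirm that the UseZGC flag is available; the paths below assume the default build output directory mentioned above:

$ ./build/linux-x86_64-normal-server-release/images/jdk/bin/java -version
$ ./build/linux-x86_64-normal-server-release/images/jdk/bin/java -XX:+UnlockExperimentalVMOptions -XX:+PrintFlagsFinal -version | grep ZGC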
Let's create a basic HelloZGC class, as follows:

class HelloZGC {
    public static void main(String[] args) {
        System.out.println("Say hello to new low pause GC - ZGC!");
    }
}

The following command can be used to enable ZGC and use it:

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC HelloZGC

Since ZGC is an experimental GC, you need to unlock it using the runtime option -XX:+UnlockExperimentalVMOptions. For enabling basic GC logging, you can add the -Xlog:gc option. Detailed logging is helpful while fine-tuning an application; you can enable it by using the -Xlog:gc* option as follows:

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xlog:gc* HelloZGC

The previous command will output all the logs to the console, which could make it difficult to search for specific content. You can direct the logs to a file instead, as follows:

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xlog:gc*:mylog.log HelloZGC

Z Garbage Collector heap

ZGC divides memory into regions, also called ZPages. ZPages can be dynamically created and destroyed. They can also be dynamically sized (unlike in the G1 GC), in multiples of 2 MB. Here are the size groups of heap regions:

Small (2 MB)
Medium (32 MB)
Large (N * 2 MB)

A ZGC heap can have multiple occurrences of these heap regions, and the medium and large regions are allocated contiguously. Unlike other GCs, the physical heap regions of ZGC can map into a bigger heap address space (which can include virtual memory). This can be crucial to combat memory fragmentation issues. Imagine that you want to allocate a really big object in memory, but can't do so due to the unavailability of contiguous space. This often leads to multiple GC cycles to free up enough contiguous space; if none is available even after (multiple) GC cycles, the JVM will shut down with OutOfMemoryError. This particular use case is not an issue with ZGC: since the physical memory maps to a bigger address space, locating a bigger contiguous space is feasible.

Z Garbage Collector phases

A GC cycle of ZGC includes multiple phases:

Pause Mark Start
Pause Mark End
Pause Relocate Start

In the first phase, Pause Mark Start, ZGC marks objects that have been pointed to by roots. This includes walking through the live set of objects, and then finding and marking them. This is by far one of the most heavy-duty workloads in the ZGC GC cycle. Once this completes, the next phase is Pause Mark End, which is used for synchronization and starts with a short pause of 1 ms. In this second phase, ZGC starts with reference processing and moves on to weak-root cleaning. It also includes the relocation set selection: ZGC marks the regions it wants to compact. The next step, Pause Relocate Start, triggers the actual region compaction. It begins with root scanning pointing into the relocation set, followed by the concurrent relocation of objects in the relocation set. The first phase, Pause Mark Start, also includes remapping the live data. Since marking and remapping of live data are the most heavy-duty GC operations, remapping isn't executed as a separate phase: it starts after Pause Relocate Start, but overlaps with the Pause Mark Start phase of the next GC cycle.

Colored pointers

Colored pointers are one of the core concepts of ZGC. They enable ZGC to find, mark, locate, and remap objects. ZGC doesn't support x32 platforms.
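Before looking at how the layout is implemented, the small, self-contained Java sketch below makes the idea of "coloring" a 64-bit reference concrete. The bit positions follow the pointer layout described in the next section; this is purely an illustration of the masking idea, not ZGC's actual implementation, which lives inside the JVM's native code:

class ColoredPointerDemo {
    // Illustrative bit positions matching the layout described below:
    // bits 0-41 hold the object address, followed by the Marked0, Marked1,
    // Remapped, and Finalizable metadata bits.
    static final long ADDRESS_MASK = (1L << 42) - 1;
    static final long MARKED0     = 1L << 42;
    static final long MARKED1     = 1L << 43;
    static final long REMAPPED    = 1L << 44;
    static final long FINALIZABLE = 1L << 45;

    public static void main(String[] args) {
        long colorless = 0x0000000011111111L;
        // Setting a metadata bit produces a "colored" view of the same address.
        System.out.printf("remapped: 0x%016x%n", colorless | REMAPPED);
        System.out.printf("marked1:  0x%016x%n", colorless | MARKED1);
        System.out.printf("marked0:  0x%016x%n", colorless | MARKED0);
        // Masking the metadata bits off recovers the raw address again.
        System.out.printf("address:  0x%016x%n", (colorless | REMAPPED) & ADDRESS_MASK);
    }
}

Running it prints 0x0000100011111111, 0x0000080011111111, and 0x0000040011111111 for the Remapped, Marked1, and Marked0 views, the same values used in the multi-mapping example later in this article.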
Implementation of colored pointers needs virtual address masking, which can be accomplished in hardware, in the operating system, or in software. The 64-bit object reference is divided as follows:

18 bits: Unused
1 bit: Finalizable
1 bit: Remapped
1 bit: Marked1
1 bit: Marked0
42 bits: Object Address

The first 18 bits are reserved for future use. The 42 address bits can address up to 4 TB of address space. Then come the remaining, intriguing, 4 bits. The Marked1 and Marked0 bits are used to mark objects for garbage collection. By setting the single Remapped bit, an object can be marked as not pointing into the relocation set. The Finalizable bit relates to concurrent reference processing: it marks an object that is only reachable through a finalizer.

When you run ZGC on a system, you will notice that it uses a lot of virtual memory space, which is not the same as the physical memory space. This is due to heap multi-mapping, which determines how objects with colored pointers are stored in virtual memory. As an example, for a colorless pointer, say, 0x0000000011111111, its colored pointers would be 0x0000100011111111 (Remapped bit set), 0x0000080011111111 (Marked1 bit set), and 0x0000040011111111 (Marked0 bit set). The same physical heap memory maps to three different locations in the address space, one for each colored pointer. The exact implementation differs depending on how the mapping is handled.

Tuning Z Garbage Collector

To get optimal performance, set up a heap size that can not only store the live set of your application but also has enough headroom to service allocations. ZGC is a concurrent garbage collector, so by setting the amount of CPU time assigned to ZGC threads, you can control how often the GC kicks in. This can be done using the following option:

-XX:ConcGCThreads=<number>

A higher value for the ConcGCThreads option will leave less CPU time for your application. On the other hand, a lower value may result in your application struggling for memory; it might generate more garbage than ZGC can collect. ZGC can also use a default value for ConcGCThreads. To fine-tune your application on this parameter, you might prefer to experiment with test values.

For advanced ZGC tuning, you can also enable large pages for enhanced application performance, using the following option:

-XX:+UseLargePages

Instead of enabling large pages, you can also enable transparent huge pages by using the following option:

-XX:+UseTransparentHugePages

The preceding option comes with additional settings and configurations, which are described on ZGC's official wiki page.

ZGC is a NUMA-aware GC. Applications executing on NUMA machines can see a noticeable performance gain. By default, NUMA support is enabled for ZGC; however, if the JVM detects that it is bound to only a subset of the machine's processors, this feature is disabled. To override the JVM's decision, the following option can be used:

-XX:+UseNUMA

Summary

We have briefly discussed the scalable, low-latency GC for OpenJDK: ZGC. It is an experimental GC that has been written from scratch. As a concurrent GC, it promises a max latency of less than 10 milliseconds, which doesn't increase with heap size or live data. At present, it only works with Linux/x64. More platforms can be supported in the future if there is considerable demand.

To know more about the applicability of Java's new features, head over to the book, Java 11 and 12 – New Features.

Using lambda expressions in Java 11 [Tutorial]
Creating a simple modular application in Java 11 [Tutorial]
Java 11 is here with TLS 1.3, Unicode 11, and more updates
Highlights from Mary Meeker’s 2019 Internet trends report

Sugandha Lahoti
12 Jun 2019
8 min read
At Recode by Vox’s 2019 Code Conference on Tuesday, Bond partner Mary Meeker made her presentation onstage, covering everything on the internet's latest trends. Meeker had first started presenting these reports in 1995, underlining the most important statistics and technology trends on the internet. Last year in September, Meeker quit Kleiner Perkins to start her own firm Bond and is popularly known as the Queen of the Internet. Mary Meeker’s 2019 Internet trends report highlighted that the internet is continuing to grow, slowly, as more users come online, especially with mobile devices. She also talked about increased internet ad spending, data growth, as well as the rise of freemium subscription business models, interactive gaming, the on-demand economy and more. https://youtu.be/G_dwZB5h56E The internet trends highlighted by Meeker include: Internet Users E-commerce and advertising Internet Usage Freemium business models Data growth Jobs and Work Online Education Immigration and Healthcare Internet Users More than 50% of the world’s population now has access to the internet. There are 3.8 billion internet users in the world with Asia-pacific leading in both users and potential. China is the largest market with 21% of total internet users and India is at 12%. However, the growth is slowing by 6% in 2018 versus 7% in 2017 because so many people have come online that new users are harder to come by. New smartphone unit shipments actually declined in 2018. Per the global internet market cap leaders, the U.S. is stable at 18 of the top 30 and China is stable at 7 of the top 30. These are the two leading countries where internet innovation is at an especially high level. If we look at revenue growth for the internet market cap leaders it continues to slow - 11 percent year-on-year in Q1 versus 13 percent in Q4. Internet usage Internet usage had a solid growth, driven by investment in innovation. The digital media usage in the U.S. is accelerating up 7% versus 5% growth in 2017. The average US adult spends 6.3 hours each day with digital media, over half of which is spent on their mobiles. Wearables had 52 million users which doubled in four years. Roughly 70 million people globally listen to podcasts in the US, a figure that’s doubled in about four years. Outside the US, there's especially high innovation in data-driven and direct fulfillment that's growing very rapidly in China. Innovation outside the US is also especially strong in financial services. Images are also becoming an increasingly relevant way to communicate. More than 50% of the tweets of impressions today are images, video or other forms of media. Interactive gaming innovation is rising across platforms as interactive games like Fortnite become the new social media for certain people. It is accelerating with 2.4 billion users up, 6 percent year-on-year in 2018. On the flip side Almost 26% of adults are constantly online versus 21% three years ago. That number jumped to 39% for 18 to 29 year-olds surveyed. However, digital media users are taking action to reduce their usage and businesses are also taking actions to help users monitor their usage. Social media usage has decelerated up 1% in 2018 versus 6% in 2017. Privacy concerns are high but they're moderating. Regulators and businesses are improving consumer privacy control. In digital media encrypted messaging and traffic are rising rapidly. In Q1, 87 percent of global web traffic was encrypted, up from 53 percent three years ago. Another usage concern is problematic content. 
Problematic content on the Internet can be less filtered and more amplified. Images and streaming can be more powerful than text. Algorithms can amplify users on patterns  and social media can amplify trending topics. Bad actors can amplify ideologies, unintended bad actors can amplify misinformation and extreme views can amplify polarization. However internet platforms are indeed driving efforts to reduce problematic content as do consumers and businesses. 88% percent of people in the U.S. believe the Internet has been mostly good for them and 70% believe the Internet has been mostly good for society. Cyber attacks have continued to rise. These include state-sponsored attacks, large-scale data provider attacks, and monetary extortion attacks. E-commerce and online advertising E-commerce is now 15 percent of retail sales. Its growth has slowed — up 12.4 percent in Q1 compared with a year earlier — but still towers over growth in regular retail, which was just 2 percent in Q1. In online advertising, on comparing the amount of media time spent versus the amount of advertising dollars spent, mobile hit equilibrium in 2018 while desktop hit that equilibrium point in 2015. The Internet ads spending on an annual basis accelerated a little bit in 2018 up 22 percent.  Most of the spending is still on Google and Facebook, but companies like Amazon and Twitter are getting a growing share. Some 62 percent of all digital display ad buying is for programmatic ads, which will continue to grow. According to the leading tech companies the internet average revenue has been decelerating on a quarterly basis of 20 percent in Q1. Google and Facebook still account for the majority of online ad revenue, but the growth of US advertising platforms like Amazon, Twitter, Snapchat, and Pinterest is outstripping the big players: Google’s ad revenue grew 1.4 times over the past nine quarters and Facebook’s grew 1.9 times, while the combined group of new players grew 2.6 times. Customer acquisition costs — the marketing spending necessary to attract each new customer — is going up. That’s unsustainable because in some cases it surpasses the long-term revenue those customers will bring. Meeker suggests cheaper ways to acquire customers, like free trials and unpaid tiers. Freemium business models Freemium business models are growing and scaling. Freemium businesses equals free user experience which enables more usage, engagement, social sharing and network effects. It also equals premium user experience which drives monetization and product innovation. Freemium business evolution started in gaming, evolving and emerging in consumer and enterprise. One of the important factors for this growth is cloud deployment revenue which grew about 58% year-over-year. Another enabler of freemium subscription business models is efficient digital payments which account for more than 50% of day-to-day transactions around the world. Data growth Internet trends indicate that a number of data plumbers are helping a lot of companies collect data, manage connections, and optimize data. In a survey of retail customers, 91% preferred brands that provided personalized offers and recommendations. 83% were willing to passively share data in exchange for personalized services and 74% were willing to actively share data in exchange for personalized experiences. Data volume and utilization is also evolving rapidly. Enterprise surpassed consumer in 2018 and cloud is overtaking both. 
More data is now stored in the cloud than on private enterprise servers or consumer devices. Jobs and Work Strong economic indicators, internet enabled services, and jobs are helping work. If we look at global GDP. China, the US and India are rising, but Europe is falling. Cross-border trade is at 29% of global GDP and has been growing for many years. Global relative unemployment concerns are very high outside the US and low in itself. Consumer confidence index is high and rising. Unemployment is at a 19-year low but job openings are at an all-time high and wages are rising. On-demand work is creating internet-enabled opportunities and efficiencies. There are 7 million on-demand workers up 22 percent year-on-year. Remote work is also creating internet enabled work opportunities and efficiency. Americans working remotely have risen from 5 percent versus 3 percent in 2000. Online education Education costs and student debt are rising in the US whereas post-secondary education enrollment is slowing. Online education enrollment is high across a diverse base of universities - public, private for-profit, and private not-for-profit.  Top offline institutions are ramping their online offerings at a very rapid rate - most recently University of Pennsylvania, University of London, University of Michigan and UC Boulder. Google's growth in creating certificates for in-demand jobs is growing rapidly which they are doing in collaboration with Coursera. Immigration and Healthcare In the U.S. 60% of the most highly valued tech companies are founded by first or second generation Americans. They employed 1.9 million people last year. USA entitlements account for 61% of government spending versus 42% 30 years ago, and shows no signs of stopping. Healthcare is steadily digitizing, driven by consumers and the trends are very powerful. You can expect more telemedicine and on-demand consultations. For details and infographics, we recommend you to go through the slide deck of the Internet trends report. What Elon Musk and South African conservation can teach us about technology forecasting. Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them Experts present the most pressing issues facing global lawmakers on citizens’ privacy, democracy and the rights to freedom of speech.

How to push Docker images to AWS' Elastic Container Registry(ECR) [Tutorial]

Savia Lobo
12 Jun 2019
12 min read
Currently, the most commonly adopted way to store and deliver Docker images is through Docker Registry, an open source application by Docker that hosts Docker repositories. This application can be deployed on-premises, as well as used as a service from multiple providers, such as Docker Hub, Quay.io, and AWS ECR.

This article is an excerpt taken from the book Kubernetes on AWS written by Ed Robinson. In this book, you will discover how to utilize the power of Kubernetes to manage and update your applications. In this article, you will learn how to use Docker for pushing images onto ECR.

The application is a simple, stateless service, where most of the maintenance work involves making sure that storage is available, safe, and secure. As any seasoned system administrator knows, that is far from an easy ordeal, especially if there is a large data store. For that reason, and especially if you're just starting out, it is highly recommended to use a hosted solution and let someone else deal with keeping your images safe and readily available.

ECR is AWS's approach to a hosted Docker registry, where there's one registry per account. It uses AWS IAM to authenticate and authorize users to push and pull images. By default, the limits for both repositories and images are set to 1,000.

Creating a repository

Creating a repository is as simple as executing the following aws ecr command:

$ aws ecr create-repository --repository-name randserver

This will create a repository for storing our randserver application. Its output should look like this:

{
    "repository": {
        "repositoryArn": "arn:aws:ecr:eu-central-1:123456789012:repository/randserver",
        "registryId": "123456789012",
        "repositoryName": "randserver",
        "repositoryUri": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver",
        "createdAt": 1543162198.0
    }
}

A nice addition to your repositories is a lifecycle policy that cleans up older versions of your images, so that you don't eventually get blocked from pushing a newer version. This can be achieved as follows, using the same aws ecr command:

$ aws ecr put-lifecycle-policy --registry-id 123456789012 --repository-name randserver --lifecycle-policy-text '{"rules":[{"rulePriority":10,"description":"Expire old images","selection":{"tagStatus":"any","countType":"imageCountMoreThan","countNumber":800},"action":{"type":"expire"}}]}'

This particular policy will start cleaning up once you have more than 800 images in the same repository. You could also clean up based on image age, or both, as well as consider only some tags in your cleanup.

Pushing and pulling images from your workstation

In order to use your newly created ECR repository, we first need to authenticate your local Docker daemon against the ECR registry. Once again, aws ecr will help you achieve just that:

aws ecr get-login --registry-ids 123456789012 --no-include-email

This will output a docker login command that adds a new user-password pair to your Docker configuration.
You can copy and paste that command, or you can just run it as follows; the results will be the same:

$(aws ecr get-login --registry-ids 123456789012 --no-include-email)

Now, pushing and pulling images works just like with any other Docker registry, using the repository URI that was output when we created the repository:

$ docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver:0.0.1
$ docker pull 123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver:0.0.1

Setting up privileges for pushing images

IAM users' permissions should allow them to perform strictly only the operations they actually need, in order to avoid any possible mistakes that might have a larger area of impact. This is also true for ECR management, and to that effect, there are three AWS IAM managed policies that greatly simplify achieving it:

AmazonEC2ContainerRegistryFullAccess: This allows a user to perform any operation on your ECR repositories, including deleting them, and should therefore be reserved for system administrators and owners.

AmazonEC2ContainerRegistryPowerUser: This allows a user to push and pull images on any repository, which is very handy for developers that are actively building and deploying your software.

AmazonEC2ContainerRegistryReadOnly: This allows a user to pull images from any repository, which is useful for scenarios where developers are not pushing their software from their workstation, and are instead just pulling internal dependencies to work on their projects.

Any of these policies can be attached to an IAM user as follows, replacing the policy name at the end of the ARN with a suitable policy and pointing --user-name to the user you are managing:

$ aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly --user-name johndoe

All these AWS managed policies share an important characteristic: they add permissions for all repositories on your registry. You'll probably find several use cases where this is far from ideal—maybe your organization has several teams that do not need access to each other's repositories; maybe you would like a user with the power to delete some repositories, but not all; or maybe you just need access to a single repository for a Continuous Integration (CI) setup. If your needs match any of these situations, you should create your own policies with permissions as granular as required.

First, we will create an IAM group for the developers of our randserver application:

$ aws iam create-group --group-name randserver-developers

{
    "Group": {
        "Path": "/",
        "GroupName": "randserver-developers",
        "GroupId": "AGPAJRDMVLGOJF3ARET5K",
        "Arn": "arn:aws:iam::123456789012:group/randserver-developers",
        "CreateDate": "2018-10-25T11:45:42Z"
    }
}

Then we'll add the johndoe user to the group:

$ aws iam add-user-to-group --group-name randserver-developers --user-name johndoe

Now we'll need to create our policy so that we can attach it to the group.
Copy this JSON document to a file:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:GetRepositoryPolicy",
            "ecr:DescribeRepositories",
            "ecr:ListImages",
            "ecr:DescribeImages",
            "ecr:BatchGetImage",
            "ecr:InitiateLayerUpload",
            "ecr:UploadLayerPart",
            "ecr:CompleteLayerUpload",
            "ecr:PutImage"
        ],
        "Resource": "arn:aws:ecr:eu-central-1:123456789012:repository/randserver"
    }]
}

To create the policy, execute the following, passing the appropriate path for the JSON document file:

$ aws iam create-policy --policy-name EcrPushPullRandserverDevelopers --policy-document file://./policy.json

{
    "Policy": {
        "PolicyName": "EcrPushPullRandserverDevelopers",
        "PolicyId": "ANPAITNBFTFWZMI4WFOY6",
        "Arn": "arn:aws:iam::123456789012:policy/EcrPushPullRandserverDevelopers",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2018-10-25T12:00:15Z",
        "UpdateDate": "2018-10-25T12:00:15Z"
    }
}

The final step is to attach the policy to the group, so that johndoe and all future developers of this application can use the repository from their workstations:

$ aws iam attach-group-policy --group-name randserver-developers --policy-arn arn:aws:iam::123456789012:policy/EcrPushPullRandserverDevelopers

Use images stored on ECR in Kubernetes

Attaching the IAM policy AmazonEC2ContainerRegistryReadOnly to the instance profile used by our cluster nodes allows the nodes to fetch images from any repository in the AWS account where the cluster resides. To use an ECR repository in this manner, set the image field of the pod template in your manifest to point to it, as in the following example:

image: 123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver:0.0.1

Tagging images

Whenever a Docker image is pushed to a registry, we need to identify the image with a tag. A tag can be any alphanumeric string: latest, stable, v1.7.3, and even c31b1656da70a0b0b683b060187b889c4fd1d958 are all perfectly valid examples of tags that you might use to identify an image that you push to ECR. Depending on how your software is developed and versioned, what you put in this tag might differ. There are three main strategies that might be adopted, depending on the types of applications and development processes we need to generate images for.

Version Control System (VCS) references

When you build images from software whose source is managed in a version control system, such as Git, the simplest way of tagging your images is to use the commit ID (often referred to as a SHA when using Git) from your VCS. This gives you a very simple way to check exactly which version of your code is running at any one time.

This first strategy is often adopted for applications where small changes are delivered in an incremental fashion. New versions of your images might be pushed multiple times a day and automatically deployed to testing and production-like environments. Good examples of these kinds of applications are web applications and other software delivered as a service. By pushing a commit ID through an automated testing and release pipeline, you can easily generate deployment manifests for an exact revision of your software.
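For instance, a minimal build step in a CI script might look like the following sketch. It assumes the example registry URI and randserver repository used throughout this article, and that the local Docker daemon has already been authenticated with aws ecr get-login as shown earlier:

# Tag the image with the short Git commit ID of the current checkout
COMMIT_SHA=$(git rev-parse --short HEAD)
IMAGE=123456789012.dkr.ecr.eu-central-1.amazonaws.com/randserver:$COMMIT_SHA

# Build and push the image so the exact revision can be referenced in deployment manifests
docker build -t "$IMAGE" .
docker push "$IMAGE"

The same $COMMIT_SHA value can then be substituted into the image field of your Kubernetes manifests by the pipeline, so that each deployment is traceable back to a single commit.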
Semantic versions

However, this strategy becomes more cumbersome and harder to deal with if you are building container images that are intended to be used by many users, whether that be multiple users within your organisation or third parties when you publish images publicly. With applications like these, it can be helpful to use a semantic version number that has some meaning, helping those that depend on your image decide whether it is safe to move to a newer version.

A common scheme for these sorts of images is called Semantic Versioning (SemVer). This is a version number made up of three individual numbers separated by dots, known as the MAJOR, MINOR, and PATCH versions. A semantic version number lays out these numbers in the form MAJOR.MINOR.PATCH. When a number is incremented, the less significant numbers to the right are reset to 0. These version numbers give downstream users useful information about how a new version might affect compatibility:

The PATCH version is incremented whenever a bug or security fix is implemented that maintains backwards compatibility.
The MINOR version is incremented whenever a new feature is added that maintains backwards compatibility.
Any changes that break backwards compatibility should increment the MAJOR version number.

This is useful because users of your images know that MINOR or PATCH level changes are unlikely to break anything, so only basic testing should be required when upgrading to a new version. But if upgrading to a new MAJOR version, they ought to check and test the impact of the changes, which might require changes to configuration or integration code.

Upstream version numbers

Often, when we build container images that repackage existing software, it is desirable to use the original version number of the packaged software itself. Sometimes, it can help to add a suffix to version the configuration that you're using to package that software with. In larger organizations, it can be common to package software tools with configuration files containing organisation-specific default settings. In that case, you might find it useful to version the configuration files as well as the software tool. If I were packaging the MySQL database for use in my organization, an image tag might look like 8.0.12-c15, where 8.0.12 refers to the upstream MySQL version and c15 is a version number I have created for the MySQL configuration files included in my container image.

Labelling images

If you have an even moderately complex workflow for developing and releasing your software, you might quickly find yourself wanting to add more semantic information about your images into their tags than just a simple version number. This can quickly become unwieldy, as you will need to modify your build and deployment tooling whenever you want to add some extra information. Thankfully, Docker images can carry labels that can be used to store whatever metadata is relevant to your image.

Adding a label to your image is done at build time, using the LABEL instruction in your Dockerfile. The LABEL instruction accepts multiple key-value pairs in this format:

LABEL <key>=<value> <key>=<value> ...

Using this instruction, we can store any arbitrary metadata that we find useful on our images. And because the metadata is stored inside the image, unlike tags, it can't be changed. By using appropriate image labels, we can discover the exact revision from our VCS, even if an image has been given an opaque tag, such as latest or stable.
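As a rough sketch of what that lookup can look like in practice, the following command reads a label back out of a local image with docker inspect. The image name and the git-commit label key are illustrative; they simply match the example Dockerfile discussed next, so substitute whatever names you actually use:

# Print the value of the git-commit label stored in the image metadata
docker inspect --format '{{ index .Config.Labels "git-commit" }}' bear:latest

Because the label travels with the image, this works wherever the image ends up, whether on a developer workstation, in ECR, or on a cluster node, regardless of which tag was used to pull it.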
If you want to set these labels dynamically at build time, you can also make use of the ARG instruction in your Dockerfile. Let's look at an example of using build args to set labels. Here is an example Dockerfile:

FROM scratch
ARG SHA
ARG BEAR=Paddington
LABEL git-commit=$SHA \
      favorite-bear=$BEAR \
      marmalade="5 jars"

When we build the container, we can pass values for our labels using the --build-arg flag. This is useful when we want to pass dynamic values such as a Git commit reference:

docker build --build-arg SHA=`git rev-parse --short HEAD` -t bear .

As with the labels that Kubernetes allows you to attach to the objects in your cluster, you are free to label your images with whatever scheme you choose, and save whatever metadata makes sense for your organization.

The Open Container Initiative (OCI), an organization that promotes standards for container runtimes and their image formats, has proposed a standard set of labels that can be used to provide useful metadata that other tools can then understand and use. If you decide to add labels to your container images, choosing to use part or all of this set of labels might be a good place to start. To know more about these labels, you can head over to our book.

Summary

In this article, we discovered how to push images from our own workstations, how to use IAM permissions to restrict access to our images, and how to allow Kubernetes to pull container images directly from ECR. To know more about how to deploy a production-ready Kubernetes cluster on the AWS platform, and more, head over to our book Kubernetes on AWS.

All Docker versions are now vulnerable to a symlink race attack
GAO recommends for a US version of the GDPR privacy laws
Cloud pricing comparison: AWS vs Azure

Polyglot programming allows developers to choose the right language to solve tough engineering problems

Richard Gall
11 Jun 2019
9 min read
Programming languages can divide opinion. They are, for many engineers, a mark of identity. Yes, they say something about the kind of work you do, but they also say something about who you are and what you value. But this is changing, with polyglot programming becoming a powerful and important trend. We’re moving towards a world in which developers are no longer as loyal to their chosen programming languages as they once were. Instead, they are more flexible and open-minded about the languages they use.

This year’s Skill Up report highlights that there are a number of different drivers behind the programming languages developers use, which, in turn, implies a level of contextual decision making. Put simply, developers today are less likely to stick with a specific programming language, and instead move between them depending on the problems they are trying to solve and the tasks they need to accomplish. Download this year's Skill Up report here.

[Figure: Skill Up 2019 data]

As the data above shows, languages aren’t often determined by organizational requirements. They are more likely to be if you’re primarily using Java or C#, but that makes sense, as these are languages that have long been associated with proprietary software organizations (Oracle and Microsoft respectively); in fact, programming languages are often chosen due to projects and use cases.

The return to programming language standardization

This is something backed up by the most recent ThoughtWorks Radar, published in April. Polyglot programming finally moved into the Adopt ‘quadrant’, after nine years in the Trial quadrant. Part of the reason for this, ThoughtWorks explains, is that the organization is seeing a reaction against this flexibility, writing that “we're seeing a new push to standardize language stacks by both developers and enterprises.” The organization argues - quite rightly - that “promoting a few languages that support different ecosystems or language features is important for both enterprises to accelerate processes and go live more quickly and developers to have the right tools to solve the problem at hand.”

Arguably, we’re in the midst of a conflict within software engineering. On the one hand, the drive to standardize tooling in the face of increasingly complex distributed systems makes sense, but it’s one that we should resist. This level of standardization will ultimately remove decision-making power from engineers.

What’s driving polyglot programming?

It’s worth digging a little deeper into why developers are starting to be more flexible about the languages they use. One of the most important drivers of this change is the dominance of Agile as a software engineering methodology. As Agile has become embedded in the software industry, software engineers have found themselves working across the stack rather than specializing in a specific part of it.

Full-stack development and polyglot programming

This is something suggested by Stack Overflow survey data. This year, 51.9% of developers described themselves as full-stack developers compared to 50.0% describing themselves as backend developers. This is a big change from 2018, when 57.9% described themselves as backend developers compared to 48.2% calling themselves full-stack developers.
Given earlier Stack Overflow data from 2016 indicating that full-stack developers are comfortable using more languages and frameworks than other roles, it’s understandable that today we’re seeing developers take more ownership and control over the languages (and, indeed, other tools) they use. With developers sitting in small Agile teams, working more closely to problem domains than they may have been a decade ago, the power is now much more in their hands to select and use the programming languages and tools that are most appropriate.

If infrastructure is code, more people are writing code... which means more people are using programming languages

But it's not just about full-stack development. With infrastructure today being treated as code, it makes sense that those responsible for managing and configuring it - sysadmins, SREs, systems engineers - need to use programming languages. This is a dramatic shift in how we think about system administration and infrastructure management; programming languages are important to a whole new group of people.

Python and polyglot programming

The popularity of Python is symptomatic of this industry-wide change. Not only is it a language primarily selected due to use case (as the data above shows), it’s also a language that’s popular across the industry. When we asked our survey respondents what language they want to learn next, Python came out on top regardless of their primary programming language.

[Figure: Skill Up 2019 data]

This highlights that Python has appeal across the industry. It doesn’t fit neatly into a specific job role, and it isn’t designed for a specific task. It’s flexible - as developers today need to be. Although it’s true that Python’s popularity is being driven by machine learning, it would be wrong to see this as the sole driver. It is, in fact, its wide range of use cases, from scripting to building web services and APIs, that is making Python so popular.

Indeed, it’s worth noting that Python is viewed as a tool as much as it is a programming language. When we specifically asked survey respondents what tools they wanted to learn, Python came up again, suggesting it occupies a category unlike any other programming language.

[Figure: Skill Up 2019 data]

What about other programming languages?

The popularity of Python is a perfect starting point for today’s polyglot programmer. It’s relatively easy to learn, and it can be used for a range of different tasks. But if we’re to convincingly talk about a new age of programming, where developers are comfortable using multiple programming languages, we have to look beyond the popularity of Python at other programming languages.

Perhaps a good way to do this is to look at the languages developers primarily using Python want to learn next. If you look at the graphic above, there’s no clear winner for Python developers. While every other language shows significant interest in Python, Python developers are looking at a range of different languages. This alone isn’t evidence of the popularity of polyglot programming, but it does indicate some level of fragmentation in the programming language ‘marketplace’. Or, to put it another way, we’re moving to a place where it becomes much more difficult to say that given languages are definitive in a specific field.
The popularity of Golang

Go has particular appeal for Python programmers, with almost 20% saying they want to learn it next. This isn’t that surprising - Go is a flexible language with many applications, from microservices to machine learning, and, most importantly, it can give you incredible performance. With powerful concurrency, goroutines, and garbage collection, it has features designed to ensure application efficiency. Given it was designed by Google, this isn’t that surprising - it’s almost purpose-built for software engineering today. Its popularity with JavaScript developers further confirms that it holds significant developer mindshare, particularly among those in positions where projects and use cases demand flexibility.

Read next: Is Golang truly community driven and does it really matter?

A return to C++

An interesting contrast to the popularity of Go is the relative popularity of C++ in our Skill Up results. C++ is ancient in comparison to Golang, but it nevertheless seems to occupy a similar level of developer mindshare. The reasons are probably similar - it’s another language that can give you incredible power and performance. For Python developers, part of the attraction comes down to its usefulness for deep learning (TensorFlow is written in C++). But more than that, C++ is also an important foundational language. While it isn’t easy to learn, it does help you to understand some of the fundamentals of software. From this perspective, it provides a useful starting point to go on and learn other languages; it’s a vital piece that can unlock the puzzle of polyglot programming.

A more mature JavaScript

JavaScript also came up in our Skill Up survey results. Indeed, Python developers are keen on the language, which tells us something about the types of tasks Python developers are doing as well as the way JavaScript has matured. On the one hand, Python developers are starting to see the value of web-based technologies, while on the other, JavaScript is also expanding in scope to become much more than just a front-end programming language.

Read next: Is web development dying?

Kotlin and TypeScript

The appearance of other smaller languages in our survey results emphasises the way in which the language ecosystem is fragmenting. TypeScript, for example, may not ever supplant JavaScript, but it could become an important addition to a developer’s skill set if they begin running into problems scaling JavaScript. Kotlin represents something similar for Java developers - indeed, it could even eventually outpace its older relative. But again, its popularity will emerge according to specific use cases. It will begin to take hold particularly where Java’s limitations become more exposed, such as in modern app development.

Rust: a Goldilocks programming language perfect for polyglot programming

One final mention deserves to go to Rust. In many ways Rust’s popularity is related to the continued relevance of C++, but it offers some improvements - essentially, it’s easier to leverage Rust, while using C++ to its full potential requires experience and skill.

Read next: How Deliveroo migrated from Ruby to Rust without breaking production

One commenter on Hacker News described it as a ‘Goldilocks’ language - “It's not so alien as to make it inaccessible, while being alien enough that you'll learn something from it.” This is arguably what a programming language should be like in a world where polyglot programming rules.
It shouldn’t be so complex as to consume your time and energy, but it should be sophisticated enough to allow you to solve difficult engineering problems.

Learning new programming languages makes it easier to solve engineering problems

The value of learning multiple programming languages is indisputable. Python is the language that’s changing the game, becoming a vital extra for a range of developers from different backgrounds, but there are plenty of other languages that could prove useful. What’s ultimately important is to explore the options that are available and to start using a language that’s right for you. Indeed, the right choice isn’t always immediately obvious - but don’t let that put you off. Give yourself some time to explore new languages and find the one that’s going to work for you.

Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?

Sugandha Lahoti
11 Jun 2019
10 min read
Most of the recent breakthroughs in Artificial Intelligence are driven by data and computation. What is essentially missing is the energy cost. Most large AI networks require huge amounts of training data to ensure accuracy. However, these accuracy improvements depend on the availability of exceptionally large computational resources, and the larger the computational resource, the more energy it consumes. This is not only costly financially (due to the cost of hardware, cloud compute, and electricity) but also straining on the environment, due to the carbon footprint required to fuel modern tensor processing hardware. Considering the climate change repercussions we are facing on a daily basis, consensus is building on the need for AI research ethics to include a focus on minimizing and offsetting the carbon footprint of research. Researchers should also report the energy cost of their work in research papers alongside time, accuracy, and other metrics.

The outsized environmental impact of deep learning was further highlighted in a recent research paper published by MIT researchers. In the paper titled “Energy and Policy Considerations for Deep Learning in NLP”, researchers performed a life cycle assessment for training several common large AI models. They quantified the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP and provided recommendations to reduce costs and improve equity in NLP research and practice.

Per the paper, training AI models can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes the manufacture of the car itself). It is estimated that we must cut carbon emissions by half over the next decade to deter escalating rates of natural disaster.

This speaks volumes about the carbon offset and raises the question of whether the heavy (carbon) investment in deep learning is really worth the marginal improvement in predictive accuracy over cheaper, alternative methods. This news alarmed people tremendously.

https://twitter.com/sakthigeek/status/1137555650718908416
https://twitter.com/vinodkpg/status/1129605865760149504
https://twitter.com/Kobotic/status/1137681505541484545

Even if some of this energy may come from renewable or carbon credit-offset resources, the high energy demands of these models are still a concern. This is because energy in many locations is not yet derived from carbon-neutral sources, and even where renewable energy is available, it is limited by the equipment available to store it.

The carbon footprint of NLP models

The researchers in this paper looked specifically at NLP models. They looked at four models, the Transformer, ELMo, BERT, and GPT-2, and trained each on a single GPU for up to a day to measure its power draw. Next, they used the number of training hours listed in each model’s original paper to calculate the total energy consumed over the complete training process. This number was then converted into pounds of carbon dioxide equivalent based on the average energy mix in the US, which closely matches the energy mix used by Amazon’s AWS, the largest cloud services provider.

The researchers found that the environmental costs of training grew proportionally to model size.
Those costs increased exponentially when additional tuning steps were used to increase the model’s final accuracy. In particular, neural architecture search had high associated costs for little performance benefit. Neural architecture search is a tuning process which tries to optimize a model by incrementally tweaking a neural network’s design through exhaustive trial and error. The researchers also noted that these figures should only be considered as baselines. In practice, AI researchers mostly develop a new model from scratch or adapt an existing model to a new data set, both of which require many more rounds of training and tuning.

Based on their findings, the authors make several proposals to heighten awareness of this issue in the NLP community and promote mindful practice and policy:

Researchers should report training time and sensitivity to hyperparameters. There should be a standard, hardware-independent measurement of training time, such as gigaflops required to convergence. There should also be a standard measurement of model sensitivity to data and hyperparameters, such as variance with respect to hyperparameters searched.

Academic researchers should get equitable access to computation resources. The trend toward training huge models on tons of data is not feasible for academics, because they don’t have the computational resources. It would be more cost effective for academic researchers to pool resources to build shared compute centers at the level of funding agencies, such as the U.S. National Science Foundation.

Researchers should prioritize computationally efficient hardware and algorithms. For instance, developers could help reduce the energy associated with model tuning by providing easy-to-use APIs implementing more efficient alternatives to brute-force search.

The next step is to introduce energy cost as a standard metric that researchers are expected to report alongside their findings. They should also try to minimise the carbon footprint by developing compute-efficient training methods, such as new ML algorithms or new engineering tools that make existing ones more compute efficient. Above all, we need to formulate strict public policies that steer digital technologies toward speeding a clean energy transition while mitigating the risks.

Another factor that contributes to high energy consumption is the electronic hardware used for most deep learning tasks. To tackle that issue, researchers and major tech companies — including Google, IBM, and Tesla — have developed “AI accelerators,” specialized chips that improve the speed and efficiency of training and testing neural networks. However, these AI accelerators use electricity and have a theoretical minimum limit for energy consumption. Also, most present-day ASICs are based on CMOS technology and suffer from the interconnect problem. Even in highly optimized architectures where data are stored in register files close to the logic units, a majority of the energy consumption comes from data movement, not logic. Analog crossbar arrays based on CMOS gates or memristors promise better performance, but as analog electronic devices, they suffer from calibration issues and limited accuracy.

Implementing chips that use light instead of electricity

Another group of MIT researchers has developed a “photonic” chip that uses light instead of electricity, and consumes relatively little power in the process.
The photonic accelerator uses more compact optical components and optical signal-processing techniques to drastically reduce both power consumption and chip area. Practical applications for such chips could include reducing energy consumption in data centers. “In response to vast increases in data storage and computational capacity in the last decade, the amount of energy used by data centers has doubled every four years, and is expected to triple in the next 10 years.”

https://twitter.com/profwernimont/status/1137402420823306240

The chip could be used to process massive neural networks millions of times more efficiently than today’s classical computers.

How the photonic chip works

The researchers give a detailed explanation of how the chip works in their research paper, “Large-Scale Optical Neural Networks Based on Photoelectric Multiplication”. The chip relies on a compact, energy-efficient “optoelectronic” scheme that encodes data with optical signals, but uses “balanced homodyne detection” for matrix multiplication. This is a technique that produces a measurable electrical signal after calculating the product of the amplitudes (wave heights) of two optical signals.

Pulses of light encoded with information about the input and output neurons for each neural network layer — which are needed to train the network — flow through a single channel. Optical signals carrying the neuron and weight data fan out to a grid of homodyne photodetectors. The photodetectors use the amplitude of the signals to compute an output value for each neuron. Each detector feeds an electrical output signal for each neuron into a modulator, which converts the signal back into a light pulse. That optical signal becomes the input for the next layer, and so on.

Limitations of photonic accelerators

Photonic accelerators generally have unavoidable noise in the signal. The more light that’s fed into the chip, the less noise and the greater the accuracy; less input light increases efficiency but negatively impacts the neural network’s performance. The ideal is a balance between the two. The efficiency of AI accelerators is measured in how many joules it takes to perform a single operation of multiplying two numbers. Traditional accelerators are measured in picojoules, or one-trillionth of a joule. Photonic accelerators measure in attojoules, which is a million times more efficient. In their simulations, the researchers found their photonic accelerator could operate with sub-attojoule efficiency.

Tech companies are the largest contributors of carbon footprint

The realization that training an AI model can produce emissions equivalent to five cars should make the carbon footprint of artificial intelligence an important consideration for researchers and companies going forward. UMass Amherst’s Emma Strubell, one of the research team and co-author of the paper, said, “I’m not against energy use in the name of advancing science, obviously, but I think we could do better in terms of considering the trade off between required energy and resulting model improvement.”

“I think large tech companies that use AI throughout their products are likely the largest contributors to this type of energy use,” Strubell said. “I do think that they are increasingly aware of these issues, and there are also financial incentives for them to curb energy use.”

In 2016, Google’s DeepMind was able to reduce the energy required to cool Google data centers by 30%. This full-fledged AI system has features including continuous monitoring and human override.
Recently, Microsoft doubled its internal carbon fee to $15 per metric ton on all carbon emissions. The funds from this higher fee will maintain Microsoft’s carbon neutrality and help meet its sustainability goals. On the other hand, Microsoft is also two years into a seven-year deal—rumored to be worth over a billion dollars—to help Chevron, one of the world’s largest oil companies, better extract and distribute oil.

https://twitter.com/AkwyZ/status/1137020554567987200

Amazon had announced that it would power its data centers with 100 percent renewable energy, without a dedicated timeline. Since 2018, Amazon has reportedly slowed its renewable energy efforts, reaching only 50 percent. It has also not announced any new deals to supply clean energy to its data centers since 2016, according to a report by Greenpeace, and it quietly abandoned plans for one of its last scheduled wind farms last year. In April, over 4,520 Amazon employees organized against Amazon’s continued profiting from climate devastation. However, Amazon rejected all 11 shareholder proposals, including the employee-led climate resolution, at its annual shareholder meeting.

The researchers behind both of these studies illustrate the dire need to change our outlook towards building Artificial Intelligence models and chips that have an impact on the carbon footprint. This does not mean halting AI research altogether; instead, there should be an awareness of the environmental impact that training AI models might have, which in turn can inspire researchers to develop more efficient hardware and algorithms for the future.

Responsible tech leadership or climate washing? Microsoft hikes its carbon tax and announces new initiatives to tackle climate change.
Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models.
Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?

Microsoft’s Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more

Bhagyashree R
11 Jun 2019
6 min read
On Sunday at E3 2019, Microsoft made some really big announcements that had the audience screaming. These included the release date of Project Scarlett, the Xbox One successor, more than 60 game trailers, Keanu Reeves taking the stage to promote Cyberpunk 2077, and much more.

E3, which stands for Electronic Entertainment Expo, is one of the biggest gaming events of the year. Its official dates are June 11-13; however, these dates are just for the shows happening at the Los Angeles Convention Center. The press conferences were held on June 8 and 9. Along with hosting the world premieres of several computer and video games, this event also showcases new hardware and software products that take the gaming experience to the next level. Here are some of the highlights from Microsoft’s press conference:

Project Scarlett will arrive in fall 2020 with Halo Infinite

Rumors have been circulating about the next generation of Xbox since December last year. Putting all these rumors to rest, Microsoft officially announced that Project Scarlett is planned for release in fall next year. The tech giant further shared that the next big space war game, Halo Infinite, will launch alongside Project Scarlett.

According to Microsoft, we can expect this new device to be four times more powerful than the Xbox One X. It includes a custom-designed CPU based on AMD’s Zen 2 and Radeon RDNA architecture. It supports 8K gaming, framerates of 120fps, and ray tracing. The device will also include a solid-state drive (SSD), enabling faster game loads than the mechanical hard drives in older consoles.

https://youtu.be/-ktN4bycj9s

xCloud will open for public trials in October, one month ahead of Google’s Stadia

After giving a brief live demonstration of its upcoming xCloud game streaming service in March, Microsoft announced that it will be available to the public in October this year. This announcement seems to be a direct response to Google’s Stadia, which was revealed in March and will make its public debut in November. Along with sharing the release date, the tech giant also gave E3 attendees the first hands-on trial of the service.

At the event, Xbox chief Phil Spencer said, “Two months ago we connected all Xbox developers to Project xCloud. Today, we invite those of you here at E3 for our first public hands-on of Project xCloud. To experience the freedom to play right here at the show.”

Microsoft built xCloud to provide gamers with a new way to play Xbox games, where the gamers decide how and when they want to play. With xCloud Console Streaming you will be able to “turn your Xbox One into your own personal and free xCloud server.” It will enable you to stream your entire Xbox One library, including games from Xbox Game Pass, to any device of your choice.

https://twitter.com/Xbox/status/1137833126959280128

Xbox Elite 2 Wireless Controller to reach you on November 4th for $179.99

Microsoft announced the Xbox Elite Wireless Controller Series 2, which it says is a totally re-engineered version of the previous Elite controller. It is open for pre-orders now and will be available on November 4th in 24 countries, priced at $179.99. The controller’s new adjustable-tension thumbsticks provide improved precision and shorter hair trigger locks let you fire faster. The device includes USB-C support, Bluetooth, and a rechargeable battery that lasts for up to 40 hours per charge. Along with all these updates, it also allows limitless customization with the Xbox Accessories app on Xbox One and Windows 10 PC.
https://youtu.be/SYVw0KqQiOI

Cyberpunk 2077 featuring Keanu Reeves to release on April 16th, 2020

Last year, CD Projekt Red, the creator of Cyberpunk 2077, said that E3 2019 would be its “most important E3” ever, and we cannot agree more. Keanu Reeves, aka John Wick himself, came to announce the release date of Cyberpunk 2077: April 16th, 2020. The trailer of the game ended with the biggest surprise for the audience: the appearance of Reeves as a character apparently named “Mr. Fusion.”

The crowd went wild as soon as Reeves took to the stage to promote Cyberpunk 2077. When the actor said that walking in the streets of Cyberpunk 2077 will be breathtaking, a guy from the crowd yelled, "you're breathtaking." To which Reeves kindly replied:

https://twitter.com/Xbox/status/1137854943006605312

The guy from the crowd was YouTuber Peter Sark, who shared on Twitter that "Keanu Reeves just announced to the world that I'm breathtaking."

https://twitter.com/petertheleader/status/1137846108305014784

CD Projekt Red is now giving him a free collector’s edition copy of the game, which is amazing! For everyone else, don’t be upset, as you can also pre-order Cyberpunk 2077’s physical and collector's editions from the official website. Unlike xCloud, attendees will not be able to get a hands-on trial of the game, but they will still be able to see a demo presentation. The demo is happening at the South Hall in the LA Convention Center, booth 1023, on June 11-13th.

The new Microsoft Flight Simulator is powered by Azure cloud AI

Microsoft showcased a new installment of its long-running Microsoft Flight Simulator series. Powered by Azure cloud artificial intelligence and satellite data, this updated simulator is capable of rendering amazingly real visuals. Though not many details have been shared, its trailer shows stunning real-time 4K footage of lifelike landscapes and aircraft. Have a look at it yourself!

https://youtu.be/ReDDgFfWlS4

Though the simulator has been PC-only in the past, this newly updated version is coming to Xbox One and will also be available via Xbox Game Pass. The specific release dates are unknown, but it is expected to be out next year.

Double Fine joins Xbox Game Studios

At the event, Tim Schafer, the founder of Double Fine, shared that his company has now joined Microsoft’s ever-growing gaming studio lineup. Double Fine Productions is the studio behind games like Psychonauts, Brutal Legend, and Broken Age. He jokingly said, "For the last 19 years, we've been independent. Then Microsoft came to us and said, 'What if we gave you a bunch of money.' And I said 'OK, yeah.'"

Schafer posted another video on YouTube explaining what this means for the company’s existing commitments. He shared that Psychonauts 2 will be provided to crowdfunders on the platforms they chose, but going forward the company will focus on "Xbox, Game Pass, and PC.”

https://youtu.be/uR9yKz2C3dY

These were just a few key announcements from the event. To know more, you can watch the Microsoft keynote on YouTube:

https://www.youtube.com/watch?v=zeYQ-kPF0iQ

12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies

Salesforce is buying Tableau in a $15.7 billion all-stock deal

Richard Gall
10 Jun 2019
4 min read
Salesforce, one of the world's leading CRM platforms, is buying data visualization software Tableau in an all-stock deal worth $15.7 billion. The news comes just days after it emerged that Google is buying one of Tableau's competitors in the data visualization market, Looker.

Taken together, the stories highlight the importance of analytics to some of the planet's biggest companies. They suggest that despite years of the big data revolution, it's only now that market-leading platforms are starting to realise that their customers want the level of capabilities offered by the best in the data visualization space.

Salesforce shareholders will use their stock to purchase Tableau. As the press release published on the Salesforce site explains, "each share of Tableau Class A and Class B common stock will be exchanged for 1.103 shares of Salesforce common stock, representing an enterprise value of $15.7 billion (net of cash), based on the trailing 3-day volume weighted average price of Salesforce's shares as of June 7, 2019." The acquisition is expected to be completed by the end of October 2019.

https://twitter.com/tableau/status/1138040596604575750

Why is Salesforce buying Tableau?

The deal is an incredible result for Tableau shareholders. At the end of last week, its market cap was $10.7 billion. This has led to some scepticism about just how good a deal this is for Salesforce. One commenter on Hacker News said: "this seems really high for a company without earnings and a weird growth curve. Their ticker is cool and maybe sales force [sic] wants to be DATA on nasdaq. Otherwise, it will be hard to justify this high markup for a tool company." With Salesforce shares dropping 4.5% as markets opened this week, it seems investors are inclined to agree - Salesforce is certainly paying a premium for Tableau.

However, whatever the long-term impact of the acquisition, the price paid underlines the fact that Salesforce views Tableau as exceptionally important to its long-term strategy. It opens up an opportunity for Salesforce to reposition and redefine itself as much more than just a CRM platform. It means it can start to compete with the likes of Microsoft, which has a full suite of professional and business intelligence tools. Moreover, it also provides the platform with another way of onboarding customers - given that Tableau is well known as a powerful yet accessible data visualization tool, it creates an avenue through which new users can find their way to the Salesforce product.

Marc Benioff, Chair and co-CEO of Salesforce, said, "we are bringing together the world’s #1 CRM with the #1 analytics platform. Tableau helps people see and understand data, and Salesforce helps people engage and understand customers. It’s truly the best of both worlds for our customers--bringing together two critical platforms that every customer needs to understand their world.”

Tableau has been a target for Salesforce for some time. Leaked documents from 2016 found that the data visualization company was one of 14 companies that Salesforce had an interest in (another was LinkedIn, which would eventually be purchased by Microsoft).

Read next: Alteryx vs. Tableau: Choosing the right data analytics tool for your business

What's in it for Tableau (aside from the money...)?

For Tableau, there are many other benefits of being purchased by Salesforce alongside the money. Primarily this is about expanding the platform's reach - Salesforce users are people who are interested in data with a huge range of use cases.
By joining up with Salesforce, Tableau will become their go-to data visualization tool. "As our two companies began joint discussions," Tableau CEO Adam Selipsky said, "the possibilities of what we might do together became more and more intriguing. They have leading capabilities across many CRM areas including sales, marketing, service, application integration, AI for analytics and more. They have a vast number of field personnel selling to and servicing customers. They have incredible reach into the fabric of so many customers, all of whom need rich analytics capabilities and visual interfaces... On behalf of our customers, we began to dream about we might accomplish if we could combine our ability to help people see and understand data with their ability to help people engage and understand customers." What will happen to Tableau? Tableau won't be going anywhere. It will continue to exist under its own brand with the current leadership all remaining, including Selipsky. What does this all mean for the technology market? At the moment, it's too early to say - but the last year or so has seen some major high-profile acquisitions by tech companies. Perhaps we're seeing the emergence of a tooling arms race as the biggest organizations attempt to arm themselves with ecosystems of established market-leading tools. Whether this is good or bad for users remains to be seen, however.  

Did unfettered growth kill Maker Media? Financial crisis leads company to shutdown Maker Faire and lay off all staff

Savia Lobo
10 Jun 2019
5 min read
Updated: On July 10, 2019, Dougherty announced the relaunch of Maker Faire and Maker Media with the new name "Make Community".

Maker Media Inc., the company behind Maker Faire, the popular event that hosts arts, science, and engineering DIY projects for children and their parents, has laid off all of its 22 employees and decided to shut down due to financial troubles.

The company started in January 2005 with MAKE, an American bimonthly magazine focused on do-it-yourself (DIY) and do-it-with-others (DIWO) projects involving computers, electronics, robotics, metalworking, woodworking, and more, for both adults and children. In 2006, the company held its first Maker Faire event, which lets attendees wander amidst giant, inspiring art and engineering installations. Maker Faire now includes 200 owned and licensed events per year in over 40 countries.

The Maker movement gained momentum and popularity when MAKE magazine first started publishing 15 years ago. The movement emerged as a source of livelihood as individuals found ways to build small businesses around their creative activity. In 2014, the White House blog posted an article stating, "Maker Faires and similar events can inspire more people to become entrepreneurs and to pursue careers in design, advanced manufacturing, and the related fields of science, technology, engineering and mathematics (STEM)." With funding from the Department of Labor, "the AFL-CIO and Carnegie Mellon University are partnering with TechShop Pittsburgh to create an apprenticeship program for 21st-century manufacturing and encourage startups to manufacture domestically." Recently, researchers from Baylor University and the University of North Carolina, in their research paper, have highlighted opportunities for studying the conditions under which the Maker movement might foster entrepreneurship outcomes.

Dale Dougherty, Maker Media Inc.'s founder and CEO, told TechCrunch, "I started this 15 years ago and it's always been a struggle as a business to make this work. Print publishing is not a great business for anybody, but it works…barely. Events are hard . . . there was a drop off in corporate sponsorship". "Microsoft and Autodesk failed to sponsor this year's flagship Bay Area Maker Faire", TechCrunch reports.

Dougherty added that the company is trying to keep the servers running. "I hope to be able to get control of the assets of the company and restart it. We're not necessarily going to do everything we did in the past but I'm committed to keeping the print magazine going and the Maker Faire licensing program", he said.

In 2016, the company laid off 17 of its employees, followed by another 8 employees recently in March. "They've been paid their owed wages and PTO, but did not receive any severance or two-week notice", TechCrunch reports. These earlier layoffs may have given staff a hint of the financial crisis affecting the company.

Maker Media Inc. had raised $10 million from Obvious Ventures, Raine Ventures, and Floodgate. Dougherty says, "It started as a venture-backed company but we realized it wasn't a venture-backed opportunity. The company wasn't that interesting to its investors anymore. It was failing as a business but not as a mission. Should it be a non-profit or something like that? Some of our best successes, for instance, are in education."

The company has a huge public following for its products. Dougherty told TechCrunch that despite the rain, Maker Faire's big Bay Area event last week met its ticket sales target.
Also, about 1.45 million people attended its events in 2016. “MAKE: magazine had 125,000 paid subscribers and the company had racked up over one million YouTube subscribers. But high production costs in expensive cities and a proliferation of free DIY project content online had strained Maker Media”, writes TechCrunch. Dougherty told TechCrunch he has been overwhelmed by the support shown by the Maker community. As of now, licensed Maker Faire events around the world will proceed as planned. “Dougherty also says he’s aware of Oculus co-founder Palmer Luckey’s interest in funding the company, and a GoFundMe page started for it”, TechCrunch reports. Mike Senese, Executive Editor, MAKE magazine, tweeted, “Nothing but love and admiration for the team that I got to spend the last six years with, and the incredible community that made this amazing part of my life a reality.” https://twitter.com/donttrythis/status/1137374732733493248 https://twitter.com/xeni/status/1137395288262373376 https://twitter.com/chr1sa/status/1137518221232238592 Former Mythbusters co-host Adam Savage, who was a regular presence at the Maker Faire, told The Verge, “Make Media has created so many important new connections between people across the world. It showed the power from the act of creation. We are the better for its existence and I am sad. I also believe that something new will grow from what they built. The ground they laid is too fertile to lie fallow for long.” On July 10, 2019, Dougherty announced he’ll relaunch Maker Faire and Maker Media with the new name “Make Community“. The official launch of Make Community will supposedly be next week. The company is also working on a new issue of Make Magazine that is planned to be published quarterly and the online archives of its do-it-yourself project guides will remain available. Dougherty told TechCrunch “with the goal that we can get back up to speed as a business, and start generating revenue and a magazine again. This is where the community support needs to come in because I can’t fund it for very long.” GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution 12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft] Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service

Richard Gall
10 Jun 2019
7 min read
This year's Skill Up survey threw a spotlight on the challenges developers and engineering teams face when it comes to cloud. Indeed, it even highlighted the extent to which cloud is still a nascent trend for many developers, even though it feels so mainstream within the industry - almost half of respondents aren't using cloud at all.

But for those that do use cloud, the survey results also illustrated some of the specific ways that people are using or plan to use cloud platforms, as well as highlighting the biggest challenges and mistakes organisations are making when it comes to cloud. What came out as particularly important is that the limitations and the opportunities of cloud must be thought of together. Our research found that cost only becomes important once a cloud platform is already in use, so if we're to use our chosen cloud platforms successfully - and cost effectively - understanding the relationship between cost and opportunity over a sustained period of time (rather than, say, a month) is absolutely essential. As one of our respondents told us, "businesses are still figuring out how to leverage cloud computing for their business needs and haven't quite got the cost model figured out."

Why does cost pose such a problem when it comes to cloud computing?

In this year's survey, we asked people what their primary motivations for using cloud are. The key motivators were use case and employment (i.e. the decision was out of the respondent's hands), but it was striking to see cost as only a minor consideration. Placed in the broader context of discussions around efficiency and a tightening global market, this seemed remarkable. It appears that people aren't entering the cloud marketplace with cost as a top consideration.

In contrast, however, this picture changes when we asked respondents about the biggest limiting factors for their chosen cloud platforms. At this point, cost becomes a much more important factor. This highlights that the reality of cloud costs only becomes apparent - or rather, becomes more apparent - once a cloud platform is implemented and being used. From this we can infer that there is a lack of strategic planning in cloud purchasing. It's almost as if technology leaders are falling into certain cloud platforms based on commonplace assumptions about what's right. This then has consequences further down the line.

We need to think about cloud cost and functionality together

The fact that functionality is also a key limitation is important to note here - in fact, it is closely tied up with cost, insofar as the functionality of each respective cloud platform is very neatly defined by its pricing structure. Take serverless, for example - although it's typically regarded as something that can be cost-effective for organizations, it can prove costly when you start to scale workloads. You might save more money simply by optimizing your infrastructure. What this means in practice is that the features you want to exploit within your cloud platform should be approached with a clear sense of how they're going to be used and how they're going to fit into the evolution of your business and technology over the medium and long term.
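That scaling point is easy to see with a back-of-the-envelope model. Below is a minimal Python sketch - the per-request, per-GB-second and per-hour prices are illustrative assumptions, not actual AWS or Azure rates - comparing the monthly cost of serving a workload on Functions as a Service against simply keeping an instance running:

# Rough monthly cost model: Functions as a Service vs an always-on instance.
# All prices below are illustrative assumptions, NOT quoted AWS/Azure/GCP rates.

FAAS_PRICE_PER_MILLION_REQUESTS = 0.20  # assumed $ per million invocations
FAAS_PRICE_PER_GB_SECOND = 0.0000166    # assumed $ per GB-second of compute
INSTANCE_PRICE_PER_HOUR = 0.10          # assumed $ per hour for an always-on VM
HOURS_PER_MONTH = 730


def faas_monthly_cost(requests: int, avg_duration_s: float = 0.2,
                      memory_gb: float = 0.5) -> float:
    """Cost of serving the month's requests with FaaS (pay per use)."""
    request_cost = requests / 1_000_000 * FAAS_PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * FAAS_PRICE_PER_GB_SECOND
    return request_cost + compute_cost


def instance_monthly_cost(instances: int = 1) -> float:
    """Cost of keeping instances running all month, regardless of traffic."""
    return instances * INSTANCE_PRICE_PER_HOUR * HOURS_PER_MONTH


if __name__ == "__main__":
    for requests in (100_000, 1_000_000, 10_000_000, 100_000_000):
        print(f"{requests:>11,} req/month: "
              f"FaaS ~${faas_monthly_cost(requests):,.2f} "
              f"vs always-on ~${instance_monthly_cost():,.2f}")

At low request volumes the function-based model costs next to nothing; past a certain volume the always-on instance wins. That crossover is exactly what a sustained, months-long view of cost is meant to catch before it shows up on the bill.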
Getting the most from leading cloud trends

There were two distinct trends that developers identified as the most exciting: machine learning and serverless. Although both are very different, they both hold a promise of efficiency. Whether that's the efficiency of moving away from traditional means of hosting to cloud-based functions, or of powerful data processing and machine-led decision making at scale, the fundamentals of both trends are about managing economies of scale in ways that would have been impossible half a decade ago.

This plays into some of the issues around cost. If serverless and machine learning both appear to offer ways of saving on spending or radically driving growth, then when that doesn't quite turn out the way technology purchasers expected, the relationship between cost and features can become a little strained.

Serverless

The idea that serverless will save you money is popular. And in general, it is inexpensive. The pricing structures of both AWS and Azure make Functions as a Service (FaaS) particularly attractive. It means you'll no longer be spending money on provisioning compute resources you don't actually need, with your provider managing the necessary elasticity.

Read next: The Future of Cloud lies in revisiting the designs and limitations of today's notion of 'serverless computing', say UC Berkeley researchers

However, as we've already seen, serverless doesn't guarantee cost efficiency. You need to properly understand how you're going to use serverless to ensure that it's not costing you big money without you realising it. One way of using it might be to employ it for very specific workloads, allowing you to experiment in a relatively risk-free manner before employing it elsewhere - whatever you decide, you must ensure that the scope and purpose of the project is clear.

Machine learning as a Service

Machine learning - and deep learning in particular - is very expensive to do. This is one of the reasons that machine learning on the cloud - machine learning as a service - is one of the most attractive features of many cloud platforms. But it's not just about cost. Using cloud-based machine learning tools also removes some of the barriers to entry, making it easier for engineers who don't necessarily have extensive training in the field to actually start using machine learning models in various ways.

However, this does come with some limitations - and just as with serverless, you really do need to understand and even visualize how you're going to use machine learning to ensure that you're not just wasting time and energy with machine learning cloud features. You need to be clear about exactly how you're going to use machine learning, what data you're going to use, where it's going to be stored, and what the end result should look like. Perhaps you want to embed machine learning capabilities inside an app? Or perhaps you want to run algorithms on existing data to inform internal decisions? Whatever it is, all these questions are important.

These types of questions will also impact the type of platform you select. Google's Cloud Platform is far and away the go-to platform for machine learning (this is one of the reasons why so many respondents said their motivation for using it was use case), but bear in mind that this could lead to some issues if the bulk of your data is typically stored on, say, AWS - you'll need to build some kind of integration, or move your data to GCP (which is always going to be a headache).
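A rough estimate can at least put a number on that headache before you commit. The sketch below assumes a hypothetical egress price per GB and a hypothetical sustained transfer rate - real figures vary by provider, region and tier - to gauge what moving a dataset between clouds might cost, and how long it might take:

# Back-of-the-envelope estimate for moving a dataset between cloud providers.
# Both figures below are assumptions for illustration, not quoted prices.

EGRESS_PRICE_PER_GB = 0.09    # assumed $ per GB of outbound data transfer
SUSTAINED_RATE_GBPS = 1.0     # assumed effective transfer rate, gigabits/s


def egress_cost(dataset_gb: float) -> float:
    """Outbound transfer charge for shipping the dataset out of its current cloud."""
    return dataset_gb * EGRESS_PRICE_PER_GB


def transfer_hours(dataset_gb: float) -> float:
    """Wall-clock time to move the dataset at the assumed sustained rate."""
    gigabits = dataset_gb * 8
    return gigabits / SUSTAINED_RATE_GBPS / 3600


if __name__ == "__main__":
    for size_gb in (500, 5_000, 50_000):  # 0.5 TB, 5 TB, 50 TB
        print(f"{size_gb:>7,} GB: ~${egress_cost(size_gb):,.0f} egress, "
              f"~{transfer_hours(size_gb):,.1f} hours to transfer")

Even with toy numbers, a multi-terabyte move means days of transfer time and a noticeable bill, before counting any of the engineering effort on the integration itself.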
The hidden costs of innovation

These types of extras are really important to consider when it comes to leveraging exciting cloud features. Yes, you need to use a pricing calculator and spend time comparing platforms, but the additional development time needed to build integrations or move things is something a calculator clearly can't account for. Indeed, this is true in the context of both machine learning and serverless. The organizational implications of your purchases are perhaps the most important consideration, and often the easiest to miss.

Control the scope and empower your team

The organizational implications aren't necessarily problems to be resolved - they could well be opportunities that you need to embrace. But you do need to prepare and be ready for those changes. Ultimately, preparation is key when it comes to leveraging the benefits of cloud. Defining the scope is critical, and to do that you need to understand what your needs are and where you want to get to. That sounds obvious, but it's all too easy to fall into the trap of focusing on the possibilities and opportunities of cloud without paying careful consideration to how to ensure it works for you.

Read the results of Skill Up 2019. Download the report here.

What Elon Musk can teach us about Futurism & Technology Forecasting

Craig Wing
07 Jun 2019
14 min read
Today, you can't build a resilient business without robust technology forecasting. If you want to future-proof your business and ensure that it's capable of adapting to change, looking ahead to the future in a way that's both methodical and thoughtful is vital. There is no shortage of tales that attest to this fact. Kodak and Blackberry are two of the best known examples, but one that lingers in my mind is Nokia.

This is a guest post by Craig Wing, futurist and speaker working at the nexus of leadership, strategy, exponential organizations and corporate culture. Follow Craig on Twitter @wingnuts123 or connect with him on LinkedIn here.

Nokia's failure to forecast the future

When it was acquired by Microsoft back in 2013, Nokia was worth just 2.9% of its market cap high of $250 billion. Back at that high, in 2000, it held a 30.6% share of the mobile market - 17.3 percentage points more than Motorola. In less than two decades it had gone from an organization widely regarded as a pinnacle of both engineering and commercial potency, to one that was complacent, blithely ignoring the reality of an unpredictable future that would ultimately lead to its demise.

"We didn't do anything wrong," Nokia CEO Stephen Elop said in a press conference just before the company was acquired by Microsoft, "but somehow we still lost." Although it's hard not to sympathize with Elop, his words nevertheless bring to mind something Bill Gates said: "Success is a lousy teacher, it seduces smart people into thinking they can't lose."

But what should you do to avoid complacency?

Focus on the process of thinking, not its content

Unfortunately, it's not as straightforward as simply looking ahead to the trends and changes that appear to be emerging on the horizon. That's undoubtedly important, and it's something you certainly should be doing, but again this can cause a new set of problems. You could be the most future-focused business leader on the planet, but if all you're focused on is what's going to happen rather than why it is happening - and, more importantly, why it's relevant to you - you're going to eventually run into the same sort of problems as Nokia.

This is a common problem I've noticed with many clients in many different industries across the globe. There is a recurring tendency to be passive in the face of the future. Instead of seeing it as something they can create and shape in a way that's relevant to them, they see it as a set of trends and opportunities that may or may not impact their organisations. They're always much more interested in what they should be thinking about rather than how they should be thinking. This is particularly true for those who hold a more deterministic view, believing everything is already planned out - that type of thinking can be dangerous as well as a little pessimistic. It's almost as if you're admitting you have no ability to influence the future.

For the rest of this post I'm going to show you new forecasting techniques for thinking about the future. While I'm primarily talking about technology forecasting, these techniques can be applied to many different domains. You might find them useful for thinking about the future of your business more generally.

How to rethink technology forecasting and planning for the future

Look backwards from the future

The cone of possibility

The cone of possibility is a common but flawed approach to forecasting. Essentially, it extrapolates the future from historical fact.
It's a way of thinking that says: this is what's happening now, so we can assume this is what's going to happen in the future. While this may seem like a common-sense approach, it can cause problems. At the most basic level, it makes it easy to make mistakes - when you use the present as a cue for thinking about the future, there's a good chance that your perspective will in some way be limited. Your understanding of something might well appear sound, but perhaps there's an important bit of context missing from your analysis.

But there are other issues with this approach, too.

The cone of possibility misses the 'why' behind events and developments. It puts you in a place where you're following others, almost as if you're trying to keep up with your neighbors, which, in turn, means you only understand the surface elements of a particular trend rather than the more sophisticated drivers behind it. Nokia had amassed a market lead with its smartphones based on the Symbian operating system, only to lose out to Apple's touchscreen iPhone. This is a great example of a company failing to understand the "why" behind a trend - that customers wanted a new way to interact with their devices that went beyond the traditional keyboard.

It's also an approach that means you'll always be playing catch-up. You can bet that the largest organizations are months, if not years, ahead of you in the R&D stakes, which means building for the future becomes a game whose rules are set by market leaders. It's no longer one that you're in charge of.

The thrust of impossibility

However, there is an alternative - something that I call the thrust of impossibility. To properly understand the concept, it's essential to appreciate that the future isn't predetermined. Yes, there are known knowns from which we can extrapolate future events, but there are also known unknowns and unknown unknowns that are beyond our control. This isn't something that should scare you; it can instead be something you use to your advantage.

If we followed the cone of possibility, the market would more or less continue in its current state, right? The thrust of impossibility, by contrast, works by looking backwards from a fixed point in the future. From this perspective, it is a more imaginative approach that requires us to expand the limits of what we believe is possible and then understand the route by which that end point can be reached. This process of 'future mapping' frees us from the cone of possibility and its boundary conditions, and allows us to conceptualize a plethora of opportunities. I like to think of this as creating memories from the future.

In more practical terms, it allows us to recalibrate our current position according to where we want to be. The benefit is that this form of technology forecasting gives direction to our current business strategy. It also allows us to amend our current trajectory if it appears to be doomed to failure, by showing how far off we actually are.

A good example of this approach to the future can be seen in Elon Musk's numerous businesses. Viewed through the cone of possibility, his portfolio of companies doesn't really make sense: Tesla, Solar City, SpaceX, The Boring Company - none fit within the framework of the cone. However, when viewed backwards from the "thrust of impossibility", we can easily see how these seemingly disparate pieces link together as part of a grander vision.
A lesson from conservation: pay attention to risk

Another way of thinking about the future and technology forecasting can be illustrated by a problem currently facing my native South Africa - rhinoceros poaching. Nearly 80% of the world's rhinos live in South Africa; the country has been hit hard by poachers, with more than 1,000 rhinos killed each year between 2013 and 2017 (approximately 3 per day).

[Image: rhino poaching statistics, via savetherhino.org]

Due to the severity of the situation, authorities are using a number of interventions to curb the slaughter. Many involve tracking the rhinos themselves and then deploying trackers and game rangers to protect them. The problem with this approach is that if the systems monitoring the geo-location of the rhinos are infiltrated, the hackers will then know the exact location of the endangered animals. Poachers can turn this defensive methodology to their own advantage.

The alternative...

As an alternative, progressive game farms realised they could monitor "early sensors" in the savanna by tracking other animals that would flee in the presence of poachers. These animals - zebras, giraffes, springbok - are of little value to poachers, but scatter in their presence. By monitoring the movements of these "early detection" herds, conservationists were better able to track not only the presence of poachers in the vicinity of rhinos but also their general movement. Poachers see no value in these seemingly unrelated sensor animals; conservationists (and rhinos) see immense value in the early warning they provide.

Likewise, as leaders we need to ensure we have sensors that orient us to the dangers in our current reality. When we monitor only our "rhinos", we may actually do more harm - either by releasing early indicators into the competitive marketplace, or by becoming myopic in how we hedge our businesses. The sensors we select must be outside our field of expertise (like the different game animals), lest we, like the conservationists, seek a solution from only one vantage point. Think about the banking sector: if it selected sensors that only watch the financial sector, it would likely have missed the rise of mobile payments and cryptocurrencies.

Not only must these sensors be outside our domain, they must also be able to explore and partner with other companies along the journey. By the nature of their selection, they should not be experts in that domain, but they should be able to provoke and question the basis of decisions from first-principles thinking. By doing this you are effectively enlarging the cone of possibility, creating insights into known unknowns and unknown unknowns.

This is very different to the way consultants are used today. Technology consultants are expected to know the 'what' of the future and draft appropriate strategies, without necessarily focusing on the broader context surrounding a client's needs (well, they should do that, but many do not…). In turn, this implies consultants draft something different from the client's current approach, yet likely follow a path constrained by the cone of possibility originating from the client's initial conditions. Technology forecasting becomes something passive, starting from a fixed point.
Don't just think about segments - think about them dynamically

Many of the business tools taught in business schools today, such as SWOT, PESTLE, and Porter's five forces, are sufficient for mapping current market conditions (magnitude) but are unable to account for the forward direction of travel as markets change. They offer snapshots, and provide a foundation for vector thinking, but they lack the dynamism required to help us manage change over a sustained period of time. In the context of today's fast-moving world, this makes technology forecasting and strategic planning very difficult.

This means we need to consider the way plans - and the situations they're meant to help us navigate - can shift and change, giving us the ability to pivot based on market conditions. How do we actually do this? We need to think carefully about the 'snapshots' that form the basis of our analysis. When they are taken, and how frequently, will affect how helpful they are for formulating a coherent long-term strategy. Strategies and plans that are only refreshed annually will yield an imperfect view of the total cone of possibility. Quarterly plans will yield higher-resolution images, but even these are not sufficient in marketplaces that are accelerating.

It might sound like a nightmare to have business leaders tweaking plans constantly - and it is! The practical step is instead to decentralise control away from central planning offices and allow those who are actually executing on the strategy the freedom to move with haste to meet customer demands and address shifting market conditions. Trust those closest to problems, and trust those closest to customers, to set and revise plans accordingly - but make sure there are clear communication channels so leadership understands what is happening. In the context of technology and software engineering, this maps nicely onto ideas around Agile and Lean: by building teams that are more autonomous and closely connected to the products and services they are developing, change can happen much more quickly, ensuring you can adapt to changes in the market.

Quantum business: remember that you're dead and alive at the same time

Quantum theory has been attracting a lot of attention over the last few years. Perhaps due in part to The Big Bang Theory, and maybe even the more recent emergence of quantum computing, the idea that a cat can be both dead and alive at the same time depending on whether we observe it (as Schrodinger showed in his famous thought experiment) is one that is simultaneously perplexing, intriguing, and even a little bit amusing.

The concept actually has a lot of value for businesses thinking about the future; indeed, it's an idea that complements technology forecasting. In an increasingly connected world, the various dependencies that exist across value chains, customer perceptions, and social media ecosystems mean that, like Schrodinger's cat, we cannot observe part of a system without interfering with it in some way. If we accept that premise, then we must also accept that the way we view (and then act on) the market will subsequently affect the entire market as well.

While very few businesses have the resources of Elon Musk, what's remarkable is that he has managed to shift the entire auto-manufacturing sector from the internal combustion engine to electric.
He's done this by doing much more than simply releasing various Tesla vehicles (Toyota and others had a greater lead time); he's managed to redefine the entire sector through autonomous manufacturing, Gigafactory battery centres, and "crowdsourced" marketing, among other innovations. Try as they might, the established players will never be able to turn back the clock. This is the new normal.

As mentioned earlier, Nokia missed the entire touchscreen revolution initiated by Apple in 2008. In the same year, Google launched the Android operating system. Nokia profits plummeted by 30%, while sales decreased by 3.1%; meanwhile, iPhone sales grew by 330%. The following year (2009), as a result of the changing marketplace and unable to keep pace with these two new entrants, Nokia reduced its workforce by 1,700 employees. It finally realized it had been too slow to react to shifting dynamics - the cat's state of being was now beyond its own control - and Nokia was surpassed by Apple, Blackberry and new non-traditional players like Samsung, HTC and LG.

Nokia is not the only giant to be dethroned: the average time spent by a company in the S&P 500 has dropped from 33 years in 1965 to 20 years in 1990, and is projected to be only 14 years by 2026. Half of the current S&P 500 will be gone within 10 years. Further, only 12% of the original Fortune 500 remain after 61 years; the other 88% have either gone bankrupt, merged, been acquired, or simply fallen off the list.

In a survey of executives from 91 companies (each with revenue over $1 billion) across more than 20 industries, respondents were asked: "What is your organization's biggest obstacle to transform in response to market change and disruption?" Forty percent cited "day-to-day decisions" that essentially pay the bills but "undermine our stated strategy to change."

Herein lies the biggest challenge for leaders in a quantum business world: your business is simultaneously dead and alive at any given time. Every day, you as a leader make decisions that determine whether it lives or dies. If you decide not to, your competitors are making those decisions anyway, and every individual decision cumulatively shifts the entire system.

Put simply, in a quantum world where everything is connected, and where ambivalence appears to rule, decision making is crucial - it forms the foundation from which more forward-thinking technology forecasting can take shape. If you don't put care and attention into the strategic decisions you make - and the analysis on which all smart decisions depend - you fall into a trap where you're at the mercy of unpredictability. And no business should be the victim of chance.