
How-To Tutorials


4 reasons IBM bought Red Hat for $34 billion

Richard Gall
29 Oct 2018
8 min read
The news that IBM is to buy Red Hat - the enterprise Linux distribution - shocked the software world this weekend. It took many people by surprise because it signals a weird new world where the old guard of tech conglomerates - almost prehistoric in the history of the industry - are revitalizing themselves by diving deep into the open source world for pearls. So, why did IBM decide to buy Red Hat? And why has it spent so much to do it? Why did IBM decide to buy Red Hat? For IBM this was an expensive step into a new world. But they wouldn't have done it without good reason. And although it's hard to center on one single reason that forced IBM's decision makers to put money on the table, there are certainly a combination of factors that meant this move simply makes sense from IBM's perspective. Here are 4 reasons why IBM is buying Red Hat: Competing in the cloud market Disappointment around the success of IBM Watson Catching up with Microsoft To help provide support for an important but struggling Linux organization Let's take a look at each of these in more detail. IBM wants to get serious about cloud computing IBM has been struggling in a competitive cloud market. It's not exactly out of the running, with some reports placing them in third after AWS and Microsoft Azure, and others in fourth, with Google's cloud offering above them. But wherever the company stands, it's true that it is not growing at anywhere near the rate of its competitors. Put simply, if it didn't act, IBM would lose significant ground in the cloud computing race. It's no coincidence that cloud was right at the top of the IBM press release. Ginni Rometty, IBM Chairman, President and Chief Executive Officer, is quoted as saying "The acquisition of Red Hat is a game-changer. It changes everything about the cloud market... IBM will become the world's #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses." Clearly, IBM wants to bring itself up to date. As The Register wrote when they covered the story on Sunday IBM "really, really, really wants to transform itself into a cool and trendy hybrid cloud platform, rather than be seen eternally as a maintainer of legacy mainframes and databases." But why buy Red Hat? You might still be thinking well, why does IBM need Red Hat to do all this? Can't it just do it itself? It ultimately comes down to expanding what businesses can do with cloud - and bringing an open source company into the IBM family will allow IBM to deliver much more effectively on this than they have before. AWS appears to implicitly understand that features and capabilities are everything when it comes to cloud - to be truly successful, IBM needs to adopt both an open source mindset and toolset to innovate at a fast pace. This is what Rometty is referring to when she talks about "the next chapter of the cloud." This is where cloud becomes more about "extracting more data and optimizing every part of the business, from supply chains to sales" than storage space. IBM's artificial intelligence product, Watson, hasn't taken off IBM is a company with its proverbial finger in many pies. Its artificial intelligence product, Watson, hasn't had the success that the company expected. Instead, it has suffered a number of disappointing setbacks this year, resulting in Deborah DiSanzo, the head of Watson Health, stepping down just a week ago. 
One of the biggest stories was MD Anderson Cancer Center stepping away from a contract with IBM, after a report by analysts at investment bank Jefferies claimed that the software was "not ready for human investigational or clinical use." But there are other stories too - all of which come together to paint a picture of a project that doesn't live up to or deliver on its hype. By contrast, AI has been most impactful as a part of a cloud product. Just look at the furore around the AI tools within AWS - there's no way government agencies and the military would be quite so interested in the product if it wasn't packaged in a way that could be easily deployed. AWS, unlike IBM, understood that AI is only worth the hype if organizations can use it easily. In effect, we're past the period where AI deserves hype on its own - it needs to be part of a wider suite of capabilities that enable innovation and invention with minimal friction. If IBM is to offer Watson's capabilities to a wide range of users, all with varying use cases, it can begin to think much more about how the end product can deliver the biggest impact for each of those cases.

IBM is playing catch up with Microsoft in terms of open source

IBM's move might be surprising, but in the context of Microsoft's transformation over the last decade, it's part of a wider pattern. The only difference is that Microsoft's attitude to open source has slowly thawed, whereas IBM has gone all out, taking an unexpected leap into the unknown. It's a neat coincidence that this was the weekend that GitHub officially became part of Microsoft. It's as if IBM saw Microsoft basking in the glow of an open source embrace and thought "we want that." Envy aside, there are serious implications. The future is now quite clearly open source - in fact, it has been for some time. You might even say that Microsoft hasn't been as quick as it could have been. But for IBM, open source has been seen simply as a tasty slice of the software pie - great, but not the whole thing. This was a misunderstanding - open source is everything. It almost doesn't even make sense to talk about open source as if it were distinct from everything else - it is software today. It's defining the future. Joseph Jacks, the founder of OSS Capital, said that "IBM buying @RedHat is not about dominating the cloud. It is about becoming an OSS company. The largest proprietary software and tech companies in the world are now furiously rushing towards the future. An open future. An open source software driven future. OSS eats everything." https://twitter.com/asynchio/status/1056693588640194560

IBM is heavily invested in Linux - and Red Hat isn't exactly thriving

However, although open source might be the dominant mode of software in 2018, there are a few murmurs about its sustainability and resilience. So, despite being central to just about everything we build and use when it comes to software, from a business perspective it isn't exactly thriving. Red Hat is a brilliant case in point. Despite being one of the first and most successful open source software businesses, providing free, open source software to customers in return for a support fee, revenues are down. Shares fell 14% in June following a disappointing financial forecast - and have fallen further since then.
This piece in TechCrunch, almost 5 years old, does a good job of explaining the relative success of Red Hat, as well as its limitations: "When you compare the market cap and revenue of Red Hat to Microsoft or Amazon or Oracle, even Red Hat starts to look like a lukewarm success. The overwhelming success of Linux is disproportionate to the performance of Red Hat. Great for open source, a little disappointing for Red Hat." From this perspective, the acquisition sets the stage for an organisation like IBM to come in and start investing in Red Hat as a foundational component of its future product and software strategy. Given that both organizations are heavily invested in Linux, this could be a really important relationship in supporting the project in the future. And although a multi-billion acquisition might not look like open source in action, it might also be one of the only ways that it's going to survive and thrive in the future. Thanks to Amarabha Banerjee, Aarthi Kumaraswamy, and Amey Varangaonkar for their help with this post.

Update on 9th July, 2019

As per reports from Fortune, IBM on Tuesday morning closed its $34 billion acquisition of Red Hat, which was announced last October. The pricey deal, which paid Red Hat owners a hefty premium of more than 60%, marks IBM CEO Ginni Rometty's biggest bet yet in transforming her 108-year-old technology company. In an interview Tuesday morning, she said some tech analysts have assumed the move to the cloud would lead to a "winner take all" scenario, where one giant platform—Amazon Web Services?—ends up with all the business. Read the full story here.

‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs

Prasad Ramesh
29 Oct 2018
5 min read
The Pentagon has been trying to get a hold of AI and related technologies from tech giants. Google employees had quit over it, and Microsoft employees had asked the company to withdraw from the JEDI project. Last Friday, Microsoft President Brad Smith wrote about Microsoft and the US military and the company's visions in this area. Amazon, Microsoft, IBM, and Oracle are the companies that have bid for the Joint Enterprise Defense Infrastructure (JEDI) project. JEDI is a department-wide cloud computing infrastructure that will give the Pentagon access to weapons systems enhanced with artificial intelligence and cloud computing.

Microsoft believes in defending the USA

"We are not going to withdraw from the future, in the most positive way possible, we are going to work to help shape it." said Brad Smith, President at Microsoft, indicating that Microsoft intends to provide its technology to the Pentagon. Microsoft did not shy away from bidding in the Pentagon's JEDI project. This is in contrast to Google, which opted out of the same program earlier this month citing ethical concerns. Smith expressed Microsoft's intent to provide AI and related technologies to the US defense department, saying, "we want the people who defend USA to have access to the nation's best technology, including from Microsoft". Smith stated that Microsoft's work in this area is based on three convictions:

Microsoft believes in the strong defense of the USA and wants the defenders to have access to the USA's best technology, including Microsoft's.
They want to use their 'knowledge and voice' to address ethical AI issues via the nation's 'civic and democratic processes'.
They are giving their employees the option to opt out of working on these projects, given that, as a global company, they employ people from many different countries.

Smith shared that Microsoft has had a long-standing history with the US Department of Defense (DOD). Their tech has been used throughout the US military, from the front office to field operations. This includes bases, ships, aircraft and training facilities.

Amazon shares Microsoft's visions

Amazon too shares these visions with Microsoft in empowering US law enforcement and defense institutions with the latest technology. Amazon already provides cloud services to power the Central Intelligence Agency (CIA). Amazon CEO Jeff Bezos said: "If big tech companies are going to turn their back on the Department of Defense, this country is going to be in trouble." Amazon also provides US law enforcement with its facial recognition technology, called Rekognition. This has been a bone of contention not just for civil rights groups but also for some of Amazon's employees. Rekognition is meant to help identify and incarcerate undesirable people, but it does not really work with accuracy. In a study by the ACLU, Rekognition incorrectly identified 28 members of the US Congress. The American Civil Liberties Union (ACLU) has now filed a Freedom of Information Act (FOIA) request which demands that the Department of Homeland Security (DHS) disclose how DHS and Immigration and Customs Enforcement (ICE) use Rekognition for law enforcement and immigration checks.

Google's rationale for withdrawing from the JEDI project

Last week, in an interview with the Fox Network, Oracle founder Larry Ellison stated that it was shocking how Google viewed this matter. Google withdrew from the JEDI project following strong backlash from many of its employees.
In its official statement, Google gave the reasons for dropping out of the JEDI contract bidding as an ethical value misalignment and the fact that it doesn't fully have all the necessary clearances to work on government projects. However, Google is open to launching a customized search engine in China that complies with China's rules of censorship, including the potential to surveil Chinese citizens.

Should AI be used in weapons?

This question is at the heart of the contentious topic of the tech industry working with the military. It is a serious topic that has been debated over the years by educated scientists and experienced leaders. Elon Musk, researchers from DeepMind and other companies have even pledged not to build lethal AI. Personally, I side with the researchers and believe AI should be used exclusively for the benefit of mankind, to enhance human lives and solve problems that improve people's lives - and not against each other in a race to build weapons or become a superpower. But then again, what would I know? Leading nations are in an AI arms race as we speak, with sophisticated national AI plans and agendas. For more details on Microsoft's interest in working with the US military, visit the Microsoft website.

'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Google employees quit over company's continued Artificial Intelligence ties with the Pentagon
Oracle's bid protest against U.S. Defense Department's (Pentagon) $10 billion cloud contract

IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever

Sugandha Lahoti
29 Oct 2018
4 min read
In probably the biggest open source acquisition ever, IBM has acquired all of the issued and outstanding common shares of Red Hat for $190.00 per share in cash, representing a total enterprise value of approximately $34 billion. However, whether this deal is more of a business proposition than a contribution to the community remains an open question. Red Hat has been struggling on the market recently. It missed its most recent revenue estimates and its guidance fell below Wall Street targets. Prior to this deal, it had a market capitalization of about $20.5 billion. With this deal, Red Hat may soon take control of its sinking ship. It will also remain a distinct unit within IBM. The company will continue to be led by Jim Whitehurst, Red Hat's CEO, and Red Hat's current management team. Jim Whitehurst will also join IBM's senior management team and report to Ginni Rometty, IBM Chairman, President, and Chief Executive Officer.

Why is Red Hat joining forces with IBM?

In the announcement, Whitehurst assured that IBM's acquisition of Red Hat will help them accelerate without compromising their culture and policies. He said, "Open source is the default choice for modern IT solutions, and I'm incredibly proud of the role Red Hat has played in making that a reality in the enterprise." He also added, "Joining forces with IBM will provide us with a greater level of scale, resources, and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience--all while preserving our unique culture and unwavering commitment to open source innovation."

What is IBM gaining from this acquisition?

IBM believes this acquisition to be a game changer. "It changes everything about the cloud market," said Rometty. "IBM will become the world's #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses." IBM and Red Hat will accelerate hybrid multi-cloud adoption across all companies. Together, they plan to "help clients create cloud-native business applications faster, drive greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management." "IBM is committed to being an authentic multi-cloud provider, and we will prioritize the use of Red Hat technology across multiple clouds," said Arvind Krishna, Senior Vice President, IBM Hybrid Cloud. "In doing so, IBM will support open source technology wherever it runs, allowing it to scale significantly within commercial settings around the world." IBM assures that it will continue to build and enhance Red Hat's partnerships with major cloud providers. It will also remain committed to Red Hat's open governance, open source contributions, participation in the open source community, and development model. The company is keen on preserving the independence and neutrality of Red Hat's open source development culture and go-to-market strategy. The news was well received by the top Red Hat decision makers, who embraced it with open arms. However, ZDNet reported that many Red Hat employees were skeptical: "I can't imagine a bigger culture clash." "I'll be looking for a job with an open-source company." "As a Red Hat employee, almost everyone here would prefer it if we were bought out by Microsoft."
People’s reactions on twitter on this acquisition are also varied: https://twitter.com/samerkamal/status/1056611186584604672 https://twitter.com/pnuojua/status/1056787520845955074 https://twitter.com/CloudStrategies/status/1056666824434020352 https://twitter.com/svenpet/status/1056646295002247169 Read more about the news on IBM’s newsroom. Red Hat infrastructure migration solution for proprietary and siloed infrastructure. IBM launches Industry’s first ‘Cybersecurity Operations Center on Wheels’ for on-demand cybersecurity support IBM Watson announces pre-trained AI tools to accelerate IoT operations

How to easily access a Windows system using publicly available exploits [Video]

Savia Lobo
26 Oct 2018
2 min read
The recent ‘Activity Alert Report’ released by the NCCIC highlights that the majority of exploits around the globe are carried out using publicly available tools. Read our article on the five tools most frequently used by cybercriminals across the globe for more details. Exploiting a vulnerability in software running on a machine can give access to the entire machine. The vulnerable application can be a service running in the OS, a web server, or an SSH server. Any service that opens a port or is accessible in some other way can be targeted. Exploit development is an extremely time-consuming and complex process, which makes it difficult to develop your own exploits. In most penetration tests, publicly available exploits are used instead. Whether an exploit works depends on various factors, such as the version of the vulnerable software, the way it is configured, and the OS used. In this video, Gergely Révay shows how to use public exploits to exploit a vulnerability in software running on a Windows 10 machine. Watch Gergely’s video below to learn how to use public exploits, demonstrated with a practical example using exploit-db.com. https://www.youtube.com/watch?v=2YoYyWGFU6A

About Gergely Révay

Gergely Révay, the instructor of this course, is a penetration testing Senior Key Expert at Siemens Corporation, Germany. He has worked as a penetration tester since 2011. Before that, he was a quality assurance engineer in his home country, Hungary. As a consultant, he performed penetration tests and security assessments in various industries, such as insurance, banking, telco, mobility, healthcare, industrial control systems, and even car production. To know more about public exploits and to master various exploits and post-exploitation techniques, check out Gergely’s course, ‘Practical Windows Penetration Testing [Video]’.

jQuery File Upload plugin exploited by hackers over 8 years, reports Akamai’s SIRT researcher
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
A year later, Google Project Zero still finds Safari vulnerable to DOM fuzzing using publicly available tools to write exploits

Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018

Sugandha Lahoti
25 Oct 2018
4 min read
At the ongoing 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), Sir Tim Berners-Lee spoke on ethics and the Internet. The ICDPPC conference, which is taking place in Brussels this week, brings together an international audience on digital ethics, a topic the European Data Protection Supervisor initiated in 2015. Some high-profile speakers and their presentations include Giovanni Buttarelli, European Data Protection Supervisor, on ‘Choose Humanity: Putting Dignity back into Digital’; a video interview with Guido Raimondi, President of the European Court of Human Rights; Tim Cook, CEO of Apple, on personal data and user privacy; and ‘What is Ethics?’ by Anita Allen, Professor of Law and Professor of Philosophy, University of Pennsylvania, among others. Per Techcrunch, Tim Berners-Lee has urged tech industries and experts to pay continuous attention to the world their software is consuming as they go about connecting humanity through technology. “Ethics, like technology, is design. As we’re designing the system, we’re designing society. Ethical rules that we choose to put in that design [impact the society]… Nothing is self-evident. Everything has to be put out there as something that we think will be a good idea as a component of our society,” he told the delegates present at the conference. He also described digital platforms as “socio-technical systems” — meaning “it’s not just about the technology when you click on the link it is about the motivation someone has, to make such a great thing and get excited just knowing that other people are reading the things that they have written”. “We must consciously decide on both of these, both the social side and the technical side,” he said. “The tech platforms are anthropogenic. They’re made by people. They’re coded by people. And the people who code them are constantly trying to figure out how to make them better.” According to Techcrunch, he also touched on the Cambridge Analytica data misuse scandal as an illustration of how sociotechnical systems are exploding simple notions of individual rights. “Your data is being taken and mixed with that of millions of other people, billions of other people in fact, and then used to manipulate everybody. Privacy is not just about not wanting your own data to be exposed — it’s not just not wanting the pictures you took of yourself to be distributed publicly. But that is important too.” He also revealed new plans about his startup, Inrupt, which was launched last month to change the web for the better. His major goal with Inrupt is to decentralize the web and to get rid of gigantic tech monopolies’ (Facebook, Google, Amazon, etc.) stronghold over user data. He hopes to achieve this with Inrupt’s new open-source project, Solid, a platform built using the existing web format. He explained that his platform can put people in control of their own data. The app, he explains, asks you where you want to put your data, so you can run your photo app or take pictures on your phone and say “I want to store them on Dropbox” or “I will store them on my own home computer”. And it does this with a new technology which provides interoperability between any app and any store. “The platform turns the privacy world upside down — or, I should say, it turns the privacy world right side up. 
You are in control of your data life… Wherever you store it, you can control and get access to it.” He concluded by saying, “We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.” The day before yesterday, The Public Voice Coalition, an organization that promotes public participation in decisions regarding the future of the Internet, came out with guidelines for AI, namely the Universal Guidelines on Artificial Intelligence, at the ICDPPC.

Tim Berners-Lee plans to decentralize the web with ‘Solid’, an open-source project for “personal empowerment through data”
EPIC’s Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018
California’s tough net neutrality bill passes state assembly vote.

What we learnt from the GitHub Octoverse 2018 Report

Amey Varangaonkar
24 Oct 2018
8 min read
Highlighting key accomplishments over the last year, Microsoft’s recent major acquisition GitHub released its yearly Octoverse report. The last 365 days have seen GitHub go from strength to strength as the world’s leading source code management platform. The Octoverse report highlights how developers work and learn on GitHub. It also gives us some interesting insights into the way developers and even organizations are collaborating across geographies and time zones on a variety of interesting projects. The Octoverse report is based on data collected from October 1, 2017 to September 30, 2018, exactly 365 days from the publication of the last Octoverse report. In this article, we look at some of the key takeaways from the Octoverse 2018 report.

Asia is home to GitHub’s fastest growing community

GitHub developers who are currently based in Asia can feel proud of themselves. Octoverse 2018 states that more open source projects have been created in Asia than anywhere else in the world. While developers all over the world are joining and using GitHub, most new signups over the last year have come from countries such as China, India, and Japan. At the same time, GitHub usage is also growing quite rapidly in Asian countries such as Hong Kong, Singapore, Bangladesh, and Malaysia. This is quite interesting, considering the growth of AI has become part of the national policies in countries such as China, Hong Kong, and Japan. We can expect these trends to continue, and developing countries such as India and Bangladesh to contribute even more going forward.

An ever-growing developer community squashes doubts on GitHub’s credibility

When Microsoft announced its plans to buy GitHub in a deal worth $7.5 billion, many eyebrows were raised. Given Microsoft’s earlier stance against open source projects, some developers were skeptical of this move. They feared that Microsoft would exploit GitHub’s popularity and inject some kind of a subscription model into GitHub in order to recover the huge investment. Many even migrated their projects from GitHub on to rival platforms such as BitBucket and GitLab in protest. However, the numbers presented in the Octoverse report seem to suggest otherwise. According to the report, the number of new registrations last year alone was more than the number of registrations in the first 6 years of GitHub, which is quite impressive. The number of active contributors on GitHub has increased by more than 1.5 times over the last year, suggesting GitHub is still the undisputed leader when it comes to code management and collaboration. With more than 1.1 billion contributions across private and public projects over one year, I think we all know where most developers’ loyalty lies.

Not just developers, organizations love GitHub too

The Octoverse report states that 2.1 million organizations are using GitHub in some capacity, across public and private repositories. This number is a staggering 40% increase from 2017 - indicating the huge reliance on GitHub for effective code management and collaboration between developers. Not just that, over 150,000 developers and organizations are using the apps and tools available on the GitHub marketplace for quick, efficient and seamless code development and management. GitHub had also launched a new feature called Security Alerts back in November 2017. This feature alerted developers of any vulnerabilities in their project dependencies, and also suggested fixes for them from the community. 
Many organizations have found this feature to be an invaluable offering by GitHub, as it allowed for the development of secure, bug-free applications. Their faith in GitHub will be reinforced even more now that the report has revealed that over the last year, more than 5 million vulnerabilities were detected and communicated across to the developers. The report also suggests that members of an organization make substantial contributions to the projects and are twice as much active when they install and use the company app on GitHub. This suggests that GitHub offers them the best environment and the luxury to develop apps just as they want. All these insights only point towards one simple fact - Organizations and businesses trust GitHub. Microsoft are walking the talk with active open source contribution Microsoft joined the Linux Foundation after its initial (and vehement) opposition to the Open Source movement. With a change in leadership and the long-term vision came the realization that open source is essential for them - and the world - to progress. Eventually, they declared their support for the cause by going platinum with the Open Source initiative. That is now clearly being reflected in their achievements of the past year. Probably the most refreshing takeaway from the Octoverse report was to see Microsoft leading the pack when it comes to active open source contribution. The report states that Microsoft’s VSCode was the top open source project with 19,000 contributors. Also, it declared that the open source documentation of Azure was the fastest growing project on GitHub. Top open source projects on GitHub (Image courtesy: GitHub State of Octoverse 2018 Report) If this was not enough evidence to suggest Microsoft has amped up their claims of supporting the Open Source movement wholeheartedly, there’s more. Over 7000 Microsoft employees have contributed to various open source projects over the past one year, making it the top-most organization with the most Open Source contribution. Open source contribution by organization (Image source: GitHub State of Octoverse 2018 Report) When we said that Microsoft’s acquisition of GitHub was a good move, we were right! React Native and Machine Learning are red hot right now React Native has been touted to be the future of mobile development by many. This claim is corroborated by some strong activity on its GitHub repository over the last year. With over 10k contributors, React Native is one of the most active open source projects right now. With JavaScript continuing to rule the roost for the 5th straight year when it comes to being the top programming language, it comes as no surprise that the cross-platform framework for building native apps is now getting a lot of traction. Top languages over time (Image source: GitHub State of Octoverse 2018 Report) With the rise in popularity of Artificial Intelligence and specifically Machine Learning, the report also highlighted the continued rise of Tensorflow and PyTorch. While Tensorflow is the third most popular open source project right now with over 9000 contributors, Pytorch is one of the fastest growing projects on GitHub. The report also showed that Google and Facebook’s experimental frameworks for machine learning, called Dopamine and Detectron respectively are getting deserved attention thanks to how they are simplifying machine learning. Given the scale at which AI is being applied in the industry right now, these tools are expected to make developers’ lives easier going forward. 
Hence, it is not surprising to see their interest centered around these tools.

GitHub’s Student Developer Pack to promote learning is a success

According to the Octoverse report, over 1 million developers have honed their skills by learning best coding practices on GitHub. With over 600,000 active student developers learning how to write effective code through the Student Developer Pack, GitHub continues to give free access to the best development tools so that students learn by doing and get valuable hands-on experience. In academia, yet another fact that points to GitHub’s usefulness when it comes to learning is how teachers use the platform to implement real-world workflows for teaching. Over 20,000 teachers in over 18,000 schools and universities have used GitHub to create over 200,000 assignments to date. It is safe to say that this number is only going to grow in the near future. You can read more about how GitHub is promoting learning in their GitHub Education Classroom Report.

GitHub’s competition has some serious catching up to do

Since Google’s parent company Alphabet lost out to Microsoft in the race to buy GitHub, they have diverted their attention to GitHub’s competitor GitLab. Alphabet have even gone on to suggest that GitLab can surpass GitHub. According to the Octoverse report, Google are only behind Microsoft when it comes to the most open source contributions by any organization. With GitLab joining forces with Google by moving their operations to Google Cloud Platform from Azure cloud, we might see Google’s contribution to GitHub reduce significantly over the next few years. Who knows, the next Octoverse report might not feature Google at all! That said, the size of the GitHub community, along with the volume of activity that happens on the platform on a per-day basis, is staggering, and no other platform comes even close. This fact is supported by the enormity of some of the numbers presented in the report:

There are over 31 million developers on the platform to date.
More than 96 million repositories are currently being hosted on GitHub.
There have been 65 million pull requests created in the last year alone, contributing to almost 33% of the total number of pull requests created to date.

These numbers dwarf other platforms such as GitLab, BitBucket and others in comparison. Not only is GitHub the world’s most popular code collaboration and version control platform, it is currently the #1 choice of tool for most developers in the world. It will take some catching up for the likes of GitLab and others to come even close to GitHub.

In 5 years, machines will do half of our job tasks of today; 1 in 2 employees need reskilling/upskilling now – World Economic Forum survey
Survey reveals how artificial intelligence is impacting developers across the tech landscape
What the IEEE 2018 programming languages survey reveals to us

3 ways to break your Rust code into modules [video]

Sugandha Lahoti
24 Oct 2018
3 min read
“Idiomatic coding means following the conventions of a given language. It is the most concise, convenient, and common way of accomplishing a task in that language, rather than forcing it to work in a way the author is familiar with from a different language.” - Adapted from Tim Mansfield

Idiomatic Rust code is beneficial both to the users of your code, when you write and package it as libraries, and when building your own applications. One of the methods of writing elegant and concise Rust code is to break up the code into modules. This clip is taken from the course Learning Rust by Leo Tindall. With this course, you will learn to write fast, low-level code in Rust. Breaking up code helps improve readability and the discoverability of code and documentation, both for you and for other contributors if you are working on a project with multiple people. Breaking up code is important because:

The code can be functionally separate.
People can figure out how the code base is structured without having to go through the documentation.

The best way to break up code is by functional units. Each module should export a few symbols, but lots of cross-coupling is a bad sign. (A minimal code sketch illustrating these ideas follows at the end of this piece.)

Use module-per-struct

If you have a lot of complex structs, it can be useful to make a sub-module for each struct. This is also applicable to other items such as enums. All implementations for these structs should live in their module. The module root can then re-export them in a flat way.

Avoid Cross-coupling

Cross-coupling between modules, and especially between levels, is a ‘code smell’ or a symptom of bad design. You should use visibility modifiers to control access to implementations only where they are needed.

Testing In-module

For many architectures, testing within each module is sufficient for unit testing. However, if necessary, depending on the organization, tests can be placed in a sub-module, generally in the same file. Watch the video to walk through each of the methods in detail. If you liked the video, don’t forget to check out the comprehensive course Learning Rust, packed with step-by-step instructions, working examples, and helpful tips and techniques on working with Rust.

About the author

Leo Tindall is a software developer and hacker from San Diego whose interests include scalability, parallel software, and machine learning.

9 reasons why Rust programmers love Rust
Rust as a Game Programming Language: Is it any good?
Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes
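As promised above, here is a minimal, self-contained sketch of the module-per-struct layout, flat re-exports, a visibility modifier, and an in-module test sub-module. It is an illustration written for this article, not code from Leo Tindall's course, and the module and type names (user, User, session, Session) are made up.

```rust
// lib.rs - each struct gets its own sub-module; the crate root re-exports them.

pub mod user {
    /// A user account. The field is private, so other modules must go
    /// through the constructor and accessor (this limits cross-coupling).
    pub struct User {
        name: String,
    }

    impl User {
        pub fn new(name: &str) -> Self {
            User { name: name.to_string() }
        }

        pub fn name(&self) -> &str {
            &self.name
        }
    }

    // In-module unit tests live right next to the code they cover.
    #[cfg(test)]
    mod tests {
        use super::User;

        #[test]
        fn constructs_with_name() {
            assert_eq!(User::new("Leo").name(), "Leo");
        }
    }
}

pub mod session {
    // `pub(crate)` keeps this detail visible inside the crate only,
    // so downstream users cannot couple to it.
    pub(crate) const TOKEN_LENGTH: usize = 32;

    pub struct Session {
        pub token: String,
    }

    impl Session {
        pub fn new() -> Self {
            Session { token: "0".repeat(TOKEN_LENGTH) }
        }
    }
}

// Flat re-exports: users write `my_crate::User` rather than `my_crate::user::User`.
pub use crate::session::Session;
pub use crate::user::User;
```

Running `cargo test` exercises the in-module test without shipping it in release builds, and the `pub(crate)` constant stays invisible to downstream crates, which keeps coupling low.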

Why does the C programming language refuse to die?

Kunal Chaudhari
23 Oct 2018
8 min read
As a technology research analyst, I try to keep up the pace with the changing world of technology. It seems like every single day, there is a new programming language, framework, or tool emerging out of nowhere. In order to keep up, I regularly have a peek at the listicles on TIOBE, PyPL, and Stackoverflow along with some twitter handles and popular blogs, which keeps my FOMO (fear of missing out) in check. So here I was, strolling through the TIOBE index, to see if a new programming language is making the rounds or if any old timer language is facing its doomsday in the lower half of the table. The first thing that caught my attention was Python, which interestingly broke into the top 3 for the first time since it was ranked by TIOBE. I never cared to look at Java, since it has been claiming the throne ever since it became popular. But with my pupils dilated, I saw something which I would have never expected, especially with the likes of Python, C#, Swift, and JavaScript around. There it was, the language which everyone seemed to have forgotten about, C, sitting at the second position, like an old tower among the modern skyscrapers in New York. A quick scroll down shocked me even more: C was only recently named the language of 2017 by TIOBE. The reason it won was because of its impressive yearly growth of 1.69% and its consistency - C has been featured in the top 3 list for almost four decades now. This result was in stark contrast to many news sources (including Packt’s own research) that regularly place languages like Python and JavaScript on top of their polls. But surely this was an indicator of something. Why would a language which is almost 50 years old still hold its ground against the ranks of newer programming language? C has a design philosophy for the ages A solution to the challenges of UNIX and Assembly The 70s was a historic decade for computing. Many notable inventions and developments, particularly in the area of networking, programming, and file systems, took place. UNIX was one such revolutionary milestone, but the biggest problem with UNIX was that it was programmed in Assembly language. Assembly was fine for machines, but difficult for humans. Watch now: Learn and Master C Programming For Absolute Beginners So, the team working on UNIX, namely Dennis Ritchie, Ken Thompson, and Brian Kernighan decided to develop a language which could understand data types and supported data structures. They wanted C to be as fast as the Assembly but with the features of a high-level language. And that’s how C came into existence, almost out of necessity. But the principles on which the C programming language was built were not coincidental. It compelled the programmers to write better code and strive for efficiency rather than being productive by providing a lot of abstractions. Let’s discuss some features which makes C a language to behold. Portability leads to true ubiquity When you try to search for the biggest feature of C, almost instantly, you are bombarded with articles on portability. Which makes you wonder what is it about portability that makes C relevant in the modern world of computing. Well, portability can be defined as the measure of how easily software can be transferred from one computer environment or architecture to another. One can also argue that portability is directly proportional to how flexible your software is. 
Applications or software developed using C are considered to be extremely flexible because you can find a C compiler for almost every possible platform available today. So if you develop your application by simply exercising some discipline to write portable code, you have yourself an application which virtually runs on every major platform.

Programmer-driven memory management

It is universally accepted that C is a high-performance language. The primary reason for this is that it works very close to the machine, almost like an Assembly language. But very few people realize that versatile features like explicit memory management make C one of the better-performing languages out there. Memory management allows programmers to scale down a program to run with a small amount of memory. This feature was important in the early days because the computers, or terminals as they used to be called, were not as powerful as they are today. But the advent of mobile devices and embedded systems has renewed programmers' interest in the C language, because these devices demand that programmers keep memory requirements to a minimum. Many programming languages today provide functionality like garbage collection that takes care of memory allocation. But C calls programmers' bluff by asking them to be very specific. This makes their programs memory efficient and inherently fast. Manual memory management also makes C one of the most suitable languages for developing other programming languages. This is because even in a garbage collector, someone has to take care of memory allocation - and that infrastructure is provided by C.

Structure is all I got

As discussed before, Assembly was difficult to work with, particularly when dealing with large chunks of code. C has a structured approach in its design which allows programmers to break down a program into multiple blocks of code for execution, often called procedures or functions. There are, of course, multiple ways in which software development can be approached. Structured programming is one such approach that is effective when you need to break down a problem into its component pieces and then convert it into application code. Although it might not be quite as in vogue as object-oriented programming is today, this approach is well suited to tasks like database scripting or developing small programs with logical sequences to carry out a specific set of tasks. As one of the best languages for structured programming, it's easy to see how C has remained popular, especially in the context of embedded systems and kernel development.

Applications that stand the test of time

If Beyoncé had been a programmer, she might well have sung "Who runs the world? C developers". And she would have been right. If you're using a digital alarm clock, a microwave, or a car with anti-lock brakes, chances are that they have been programmed using C. Though it was never developed specifically for embedded systems, C has become the de facto programming language for embedded developers, systems programmers, and kernel development.

C: the backbone of our operating systems

We already know that the world-famous UNIX system was developed in C, but is it the only popular application that has been developed using C? You'll be astonished to see the list of applications that follows: The desktop operating system market is dominated by three major operating systems: Windows, macOS, and Linux. 
The kernels of all these OSes have been developed using the C programming language. Similarly, Android, iOS, and Windows are some of the popular mobile operating systems whose kernels were developed in C. Just like UNIX, the development of Oracle Database began in Assembly and then switched to C. It's still widely regarded as one of the best database systems in the world. Not only Oracle but MySQL and PostgreSQL have also been developed using C - the list goes on and on.

What does the future hold for C?

So far we have discussed the high points of C programming, its design principles, and the applications that were developed using it. But the bigger question is what its future might hold. The answer to this question is tricky, but there are several indicators which show positive signs. IoT is one such domain where the C programming language shines. Whether or not beginner programmers should learn C has been a topic of debate everywhere. The general consensus is that learning C is always a good thing, as it builds up your fundamental knowledge of programming and looks good on the resume. But IoT provides another reason to learn C, due to the rapid growth of the IoT industry. We have already seen the massive number of applications built on C, whose codebases are still maintained in it. Switching to a different language means increased cost for the company. Since it is used by numerous enterprises across the globe, the demand for C programmers is unlikely to vanish anytime soon.

Read Next

Rust as a Game Programming Language: Is it any good?
Google releases Oboe, a C++ library to build high-performance Android audio apps
Will Rust Replace C++?

Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web

Prasad Ramesh
23 Oct 2018
10 min read
WebAssembly defines an Abstract Syntax Tree (AST) in a binary format and a corresponding assembly-like text format for executable code in Web pages. It can be considered as a new language or a web standard. You can create and debug code in plain text format. It appeared in browsers last year, but that was just a barebones version. Many new features are to be added that could transform what you can do with WebAssembly. The minimum viable product (MVP) WebAssembly started with Emscripten, a toolchain. It made C++ code run on the web by transpiling it to JavaScript. But the automatically generated JS was still significantly slower than the native code. Mozilla engineers found a type system hidden in the generated JS. They figured out how to make this JS run really fast, which is now called asm.js. This was not possible in JavaScript itself, and a new language was needed, designed specifically to be compiled to. Thus was born WebAssembly. Now we take a look at what was needed to get the MVP of WebAssembly running. Compile target: A language agnostic compile target to support more languages than just C and C++. Fast execution: The compiler target had to be designed fast in order to keep up with user expectations of smooth interactions. Compact: A compact compiler target to be able to fit and quickly load pages. Pages with large code bases of web apps/desktop apps ported to the web. Linear memory: A linear model is used to give access to specific parts of memory and nothing else. This is implemented using TypedArrays, similar to a JavaScript array except that it only contains bytes of memory. This was the MVP vision of WebAssembly. It allowed many different kinds of desktop applications to work on your browser without compromising on speed. Heavy desktop applications The next achievement is to run heavyweight desktop applications on the browser. Something like Photoshop or Visual Studio. There are already some implementations of this, Autodesk AutoCAD and Adobe Lightroom. Threading: To support the use of multiple cores of modern CPUs. A proposal for threading is almost done. SharedArrayBuffers, an important part of threading had to be turned off this year due to Spectre vulnerability, they will be turned on again. SIMD: Single instruction multiple data (SIMD) enables to take a chunk of memory and split it up across different execution units/cores. It is under active development. 64-bit addressing: 32-bit memory addresses only allow 4GB of linear memory to store addresses. 64-bit gives 16 exabytes of memory addresses. The approach to incorporate this will be similar to how x86 or ARM added support for 64-bit addressing. Streaming compilation: Streaming compilation is to compile a WebAssembly file while still being downloaded. This allows very fast compilation resulting in faster web applications. Implicit HTTP caching: The compiled code of a web page in a web application is stored in HTTP cache and is reused. So compiling is skipped for any page already visited by you. Other improvements: These are upcoming discussion on how to even better the load time. Once these features are implemented, even heavier apps can run on the browser. Small modules interoperating with JavaScript In addition to heavy applications and games, WebAssembly is also for regular web development. Sometimes, small modules in an app do a lot of the work. The intent is to make it easier to port these modules. This is already happening with heavy applications, but for widespread use a few more things need to be in place. 
Fast calls between JS and WebAssembly: Integrating a small module will need a lot of calls between JS and WebAssembly, so the goal is to make these calls faster. In the MVP the calls weren't fast. They are now fast in Firefox, and other browsers are also working on it.
Fast and easy data exchange: With JS and WebAssembly calling each other frequently, data also needs to be passed between them. The challenge is that WebAssembly only understands numbers, so passing complex values is currently difficult. The object has to be converted into numbers, put in linear memory, and WebAssembly is passed its location in linear memory. There are many proposals underway, the most notable being the tools the Rust ecosystem has created to automate this (a minimal sketch of this kind of glue appears at the end of this article).
ESM integration: The WebAssembly module isn't actually a part of the JS module graph. Currently, developers instantiate a WebAssembly module by using an imperative API. ECMAScript module integration is necessary to use import and export with JS. The proposals have made progress, and work with other browser vendors has been initiated.
Toolchain integration: There needs to be a place to distribute and download the modules, and tools to bundle them. While there is no need for a separate ecosystem, the tools do need to be integrated. There are tools like wasm-pack to automate this.
Backwards compatibility: To support older versions of browsers, even the versions that were present before WebAssembly came into the picture. This is to help developers avoid writing another implementation just to support an old browser. There's a wasm2js tool that takes a wasm file and outputs JS; it is not going to be as fast, but it will work with older versions.

The proposal for small modules in WebAssembly is close to being complete, and on completion it will open up the path for work on the following areas.

JS frameworks and compile-to-JS languages

There are two use cases: rewriting large parts of JavaScript frameworks in WebAssembly, and compiling statically-typed compile-to-JS languages to WebAssembly instead of JS. For this to happen, WebAssembly needs to support high-level language features.

Garbage collector: Integration with the browser's garbage collector. The reason is to speed things up by working with components managed by the JS VM. Two proposals are underway and should be incorporated sometime next year.
Exception handling: Better support for exception handling to handle the exceptions and actions from different languages. C#, C++, and JS use exceptions extensively. It is in the R&D phase.
Debugging: The same level of debugging support as JS and compile-to-JS languages. There is support in browser devtools, but it is not ideal. A subgroup of the WebAssembly CG is working on it.
Tail calls: Functional languages support this. It allows calling a new function without adding a new stack frame to the stack. There is a proposal underway.

Once these are in place, JS frameworks and many compile-to-JS languages will be unlocked.

Outside the browser

This refers to everything that happens in systems or places other than your local machine. A really important part is the link, a very special kind of link. The special thing about this link is that people can link to pages without having to put them in any central registry, with no need to ask who the person is, etc. It is this ease of linking that formed global communities. However, there are two unaddressed problems.

Problem #1: How does a website know what code to deliver to your machine depending on the OS and device you are using? 
It is not practical to have different versions of code for every device possible. The website has only one code, the source code which is translated to the user’s machine. With portability, you can load code from unknown people while not knowing what kind of device are they using. This brings us to the second problem. Problem #2: If the people whose web pages you load are not known, there comes the question of trust. The code from a web page can contain malicious code. This is where security comes into picture. Security is implemented at the browser level and filters out malicious content if detected. This makes you think of WebAssembly as just another tool in the browser toolbox which it is. Node.js WebAssembly can bring full portability to Node.js. Node gives most of the portability of JavaScript on the web. There are cases where performance needs to be improved which can be done via Node’s native modules. These modules are written in languages such as C. If these native modules were written in WebAssembly, they wouldn’t need to be compiled specifically for the target architecture. Full portability in Node would mean the exact same Node app running across different kinds of devices without needing to compile. But this is not possible currently as WebAssembly does not have direct access to the system’s resources. Portable interface The Node core team would have to figure out the set of functions to be exposed and the API to use. It would be nice if this was something standard, not just specific to Node. If done right, the same API could be implemented for the web. There is a proposal called package name maps providing a mechanism to map a module name to a path to load the module from. This looks likely to happen and will unlock other use cases. Other use cases of outside the browser Now let’s look at the other use cases of outside the browser. CDNs, serverless, and edge computing The code to your website resides in a server maintained by a service provider. They maintain the server and make sure the code is close to all the users of your website. Why use WebAssembly in these cases? Code in a process doesn’t have boundaries. Functions have access to all memory in that process and they can call any functions. On running different services from different people, this is a problem. To make this work, a runtime needs to be created. It takes time and effort to do this. A common runtime that could be used across different use cases would speed up development. There is no standard runtime for this yet, however, some runtime projects are underway. Portable CLI tools There are efforts to get WebAssembly used in more traditional operating systems. When this happens, you can use things like portable CLI tools used across different kinds of operating systems. Internet of Things Smaller IoT devices like wearables etc are small and have resource constraints. They have small processors and less memory. What would help in this situation is a compiler like Cranelift and a runtime like wasmtime. Many of these devices are also different from one another, portability would address this issue. Clearly, the initial implementation of WebAssembly was indeed just an MVP and there are many more improvements underway to make it faster and better. Will WebAssembly succeed in dominating all forms of software development? For in depth information with diagrams, visit the Mozilla website. 
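As mentioned in the data-exchange discussion above, the Rust ecosystem has tooling that automates passing complex values across the JS/WebAssembly boundary. Below is a minimal, illustrative sketch using the wasm-bindgen crate (commonly used together with the wasm-pack tool named in the article). The module and function names are invented for this example; this is not code from Mozilla's post.

```rust
// Illustrative only: a tiny Rust module exposing a struct and functions to JS.
// wasm-bindgen generates the glue that moves strings and objects through
// WebAssembly's linear memory, so JS callers never deal with raw pointers.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct Greeter {
    name: String, // non-numeric data lives in linear memory behind the scenes
}

#[wasm_bindgen]
impl Greeter {
    #[wasm_bindgen(constructor)]
    pub fn new(name: String) -> Greeter {
        Greeter { name }
    }

    // A Rust String is converted to a JS string by the generated bindings.
    pub fn greet(&self) -> String {
        format!("Hello, {}!", self.name)
    }
}

// Plain numbers cross the boundary directly, with no conversion needed.
#[wasm_bindgen]
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

// From JS (after `wasm-pack build`), usage would look roughly like:
//   import { Greeter, add } from "./pkg/hello_wasm.js";
//   new Greeter("web").greet();  // "Hello, web!"
//   add(2, 3);                   // 5
```

The point of the sketch is the division of labour the article describes: numbers pass straight through, while anything richer is marshalled through linear memory by generated glue rather than by hand.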
Portable interface
The Node core team would have to figure out the set of functions to expose and the API to use. It would be nice if this were something standard, not just specific to Node; if done right, the same API could be implemented for the web. There is also a proposal called package name maps, which provides a mechanism to map a module name to the path to load the module from. This looks likely to happen and will unlock other use cases.

Other use cases outside the browser
Now let's look at the other use cases outside the browser.
CDNs, serverless, and edge computing
The code for your website resides on a server maintained by a service provider, which maintains the server and makes sure the code is close to all of your website's users. Why use WebAssembly in these cases? Code in a process doesn't have boundaries: functions have access to all the memory in that process and can call any other function. When services from different people run in the same process, this is a problem. Making this work means creating a runtime, which takes time and effort; a common runtime that could be shared across different use cases would speed up development. There is no standard runtime for this yet, although some runtime projects are underway.
Portable CLI tools
There are efforts to get WebAssembly used in more traditional operating systems. When that happens, things like portable CLI tools could be used across different kinds of operating systems.
Internet of Things
Small IoT devices such as wearables have tight resource constraints: small processors and little memory. What would help here is a compiler like Cranelift and a runtime like wasmtime. Many of these devices also differ from one another, and portability addresses exactly that.

Clearly, the initial implementation of WebAssembly was indeed just an MVP, and there are many more improvements underway to make it faster and better. Will WebAssembly succeed in dominating all forms of software development? For in-depth information with diagrams, visit the Mozilla website.

Ebiten 1.8, a 2D game library in Go, is here with experimental WebAssembly support and newly added APIs
Testing WebAssembly modules with Jest [Tutorial]
Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls

EPIC’s Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018

Natasha Mathur
23 Oct 2018
5 min read
The Public Voice Coalition, an organization that promotes public participation in decisions regarding the future of the Internet, released its guidelines for AI, the Universal Guidelines on Artificial Intelligence (UGAI), today. The UGAI were announced at the ongoing 40th International Data Protection and Privacy Commissioners Conference (ICDPPC) in Brussels, Belgium. The ICDPPC is a worldwide forum where independent regulators from around the world come together to explore high-level recommendations regarding privacy, freedom, and the protection of data. These recommendations are addressed to governments and international organizations. Speakers at the 40th ICDPPC include Tim Berners-Lee (director of the World Wide Web Consortium), Tim Cook (CEO of Apple Inc.), Giovanni Buttarelli (European Data Protection Supervisor), and Jagdish Singh Khehar (44th Chief Justice of India), among others.

The UGAI combine elements of human rights doctrine, data protection law, and ethical guidelines. “We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems”, reads the announcement page. The UGAI comprise twelve principles for AI governance that haven’t previously been covered in similar policy frameworks. Let’s have a look at these principles.

Transparency principle
The Transparency principle puts emphasis on an individual’s right to know the basis of a particular AI decision concerning them. This means individuals affected by an AI decision should have access to the factors, the logic, and the techniques that produced the outcome.

Right to human determination
The Right to human determination focuses on the fact that individuals, and not machines, should be responsible when it comes to automated decision-making. For instance, during the operation of an autonomous vehicle it is impractical to insert a human decision before the machine makes an automated decision. However, if an automated system fails, this principle should apply and a human assessment of the outcome should be made to ensure accountability.

Identification Obligation
This principle establishes the foundation of AI accountability by making clear the identity of an AI system and of the institution responsible for it. An AI system usually knows a lot about an individual, but the individual might not even be aware of who operates the AI system.

Fairness Obligation
The Fairness Obligation emphasizes that assessing the objective outcomes of an AI system is not sufficient to evaluate it. It is important for institutions to ensure that AI systems do not reflect unfair bias or make discriminatory decisions.

Assessment and Accountability Obligation
This principle focuses on assessing an AI system, based on factors such as its benefits, purpose, objectives, and the risks involved, before and during its deployment. An AI system should be deployed only after this evaluation is complete. If the assessment reveals substantial risks concerning public safety or cybersecurity, the AI system should not be deployed. This, in turn, ensures accountability.
Accuracy, Reliability, and Validity Obligations
This principle sets out the key responsibilities related to the outcome of automated decisions made by an AI system. Institutions must ensure the accuracy, reliability, and validity of the decisions their AI system makes.

Data Quality Principle
This puts an emphasis on the need for institutions to establish data provenance, and to assure the quality and relevance of the data fed into their AI algorithms.

Public Safety Obligation
This principle ensures that institutions assess the public safety risks arising from AI systems that control devices in the physical world, and that they implement the necessary safety controls within such AI systems.

Cybersecurity Obligation
This principle follows on from the Public Safety Obligation and ensures that institutions developing and deploying these AI systems take cybersecurity threats into account.

Prohibition on Secret Profiling
This principle states that no institution shall establish a secret profiling system. This is to preserve the possibility of independent accountability.

Prohibition on Unitary Scoring
This principle states that no national government shall maintain a general-purpose score on its citizens or residents. “A unitary score reflects not only a unitary profile but also a predetermined outcome across multiple domains of human activity,” reads the guideline page.

Termination Obligation
The Termination Obligation states that an institution has an affirmative obligation to terminate an AI system it has built if human control of that system is no longer possible.

For more information, check out the official UGAI documentation.

The ethical dilemmas developers working on Artificial Intelligence products must consider
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
Introducing Deon, a tool for data scientists to add an ethics checklist

Following Linux, GNU publishes ‘Kind Communication Guidelines’ to benefit members of ‘disprivileged’ demographics

Sugandha Lahoti
23 Oct 2018
5 min read
The GNU Project published its Kind Communication Guidelines yesterday, encouraging contributors to be kinder in their communication with fellow contributors, especially women and other members of disprivileged demographics. The news follows the recent changes to the code of conduct of the Linux community. Last month, the Linux maintainers replaced their Code of Conflict with a Code of Conduct. The change was committed by Linus Torvalds, who shortly after the change took a self-imposed leave from the project to work on his behavior. By switching to a Code of Conduct, Linux placed emphasis on how contributors and maintainers work together to cultivate an open and safe community that people want to be involved in.

However, Linux's move was not received well by many of its developers. Some even threatened to pull blocks of code important to the project in revolt against the change. The main concern was that the new CoC could be used randomly or selectively as a tool to punish or remove anyone from the community. Read the summary of developers' views on the Code of Conduct that, according to them, justifies their decision.

GNU is taking a different approach from Linux in evolving its community into a more welcoming place for everyone. As opposed to a stricter code of conduct, which makes people follow rules or face punishment, the Kind Communication Guidelines aim to guide people towards kinder communication rather than ordering them to be kind.

What do Stallman's 'Kindness' guidelines say?
In a post, Richard Stallman, President of the Free Software Foundation, said, “People are sometimes discouraged from participating in GNU development because of certain patterns of communication that strike them as unfriendly, unwelcoming, rejecting, or harsh. This discouragement particularly affects members of disprivileged demographics, but it is not limited to them.” He further adds, “Therefore, we ask all contributors to make a conscious effort, in GNU Project discussions, to communicate in ways that avoid that outcome—to avoid practices that will predictably and unnecessarily risk putting some contributors off.”

Stallman encourages contributors to lead by example and apply the following guidelines in their communication.

Do not give heavy-handed criticism
Do not criticize people for wrongs that you only speculate they may have done; try to understand their work. Please respond to what people actually said, not to exaggerations of their views; your criticism will not be constructive if it is aimed at a target other than their real views. It is helpful to show contributors that being imperfect is normal and to politely help them fix their problems. Reminders about problems should be gentle and not too frequent.

Avoid discrimination based on demographics
Treat other participants with respect, especially when you disagree with them. He requests that people identify and acknowledge others by the names they use and their gender identity. Avoid presuming or commenting on a person's typical desires, capabilities, or actions as a member of some demographic group; these are off-topic in GNU Project discussions.

Personal attacks are a big no-no
Avoid making personal attacks or adopting a harsh tone towards a person. Go out of your way to show that you are criticizing a statement, not a person. Vice versa, if someone attacks or offends your personal dignity, please don't “hit back” with another personal attack. “That tends to start a vicious circle of escalating verbal aggression.
A private response, politely stating your feelings as feelings, and asking for peace, may calm things down.” Avoid arguing unceasingly for your preferred course of action when a decision for some other course has already been made; that tends to block the activity's progress.

Avoid indulging in political debates
Contributors are asked not to raise unrelated political issues in GNU Project discussions. The only political positions the GNU Project endorses are that users should have control of their own computing (for instance, through free software) and support for basic human rights in computing.

Stallman hopes that these guidelines will encourage more contributions to GNU projects, and that the subsequent discussions will be friendlier and reach conclusions more easily. Read the full guidelines on the GNU blog.

People's reactions to GNU's move have been mostly positive.
https://twitter.com/MatthiasStrubel/status/1054406791088562177
https://twitter.com/0xUID/status/1054506057563824130
https://twitter.com/haverdal76/status/1054373846432673793
https://twitter.com/raptros_/status/1054415382063316993

Linus Torvalds and Richard Stallman have been the fathers of the open source movement since its inception over twenty years ago. As such, these moves underline that open source does have a toxic culture problem, but that it is evolving and sincerely working to become more open and welcoming, so that anyone can contribute to projects easily. We'll be watching this space closely to see which approach to inclusion works more effectively, and whether there are other approaches that could make this transition smooth for everyone involved.

Stack Overflow revamps its Code of Conduct to explain what ‘Be nice’ means – kindness, collaboration, and mutual respect.
Linux drops Code of Conflict and adopts new Code of Conduct.
Mozilla drops “meritocracy” from its revised governance statement and leadership structure to actively promote diversity and inclusion

Exploring shaders and materials in Unity 2018.x to develop scalable mobile games [video]

Savia Lobo
23 Oct 2018
2 min read
Shaders are simple programs used for graphics and effects, generally designed to run on the GPU. They are written in specialized shading languages such as HLSL, Cg, or GLSL rather than in general-purpose programming languages. Materials, on the other hand, define how a surface should be rendered, including references to the textures it uses, tiling information, color tints, and more. The options available for a material depend on which shader the material is using.

Types of Shaders
Surface shaders: Surface shaders in Unity are a code-generation approach that makes it easier to write lit shaders than using low-level vertex/pixel shader programs. While they are widely used in modern games thanks to their robustness, they are expensive.
Vertex shaders: These perform operations on each vertex and are very fast.
Fragment shaders: These operate on each fragment (roughly, each potential pixel) produced when triangles are rasterized.

In the following video, Raymundo Barrera outlines the basic difference and relationship between shaders and materials. He also explains the material-shader connection and what a simplified 3D rendering pipeline looks like.
https://www.youtube.com/watch?v=MEp4asS9v_g&list=PLTgRMOcmRb3NeQU6M8muq7Qev8e8nMfsY&index=4
For more hands-on experience with relevant code samples for porting a game to mobile, adding downloadable content, and tracking your game's performance, do visit Raymundo's course titled Hands-On Unity 2018.x Game Development for Mobile [Video].

About the author
Raymundo Barrera is a software engineer who has spent the better part of the last decade working on various serious, entertainment, and educational projects in Unity. He is currently working in education tech as director of mobile engineering at a well-known education company. You can connect with him on LinkedIn or on his personal website.

Getting started with ML agents in Unity [Tutorial]
Working with shaders in C++ to create 3D games
Multi-agents environments and adversarial self-play in Unity [Tutorial]

Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel

Natasha Mathur
22 Oct 2018
4 min read
It was last month when Linus Torvalds took a break from kernel development. During his break, he assigned Greg Kroah-Hartman as Linux's temporary leader, and it was Kroah-Hartman who released Linux 4.19 today, after eight release candidates, at the ongoing Linux Foundation Open Source Summit in Edinburgh. The new release includes features such as a new AIO-based polling interface, L1TF vulnerability mitigations, the block I/O latency controller, time-based packet transmission, and the CAKE queuing discipline, among other minor changes.

The Linux 4.19 kernel release announcement is slightly different and longer than usual: apart from mentioning the major changes, it also talks about welcoming newcomers and helping them learn things with ease. “By providing a document in the kernel source tree that shows that all people, developers, and maintainers alike, will be treated with respect and dignity while working together, we help to create a more welcome community to those newcomers, which our very future depends on if we all wish to see this project succeed at its goals”, mentions Hartman. Moreover, Hartman also welcomed Linus back into the game as he wrote, “And with that, Linus, I'm handing the kernel tree back to you.  You can have the joy of dealing with the merge window”.

Let's discuss the features in the Linux 4.19 kernel.

AIO-based polling interface
A new polling API based on the asynchronous I/O (AIO) mechanism was posted by Christoph Hellwig earlier this year. AIO enables the submission of I/O operations without waiting for their completion, and polling is a natural addition to it, since the point of polling is to avoid waiting for operations to complete. The Linux 4.19 kernel release comes with AIO poll operations that operate in "one-shot" mode: once a poll notification is generated, a new IOCB_CMD_POLL IOCB has to be submitted for that file descriptor.

To provide support for AIO-based polling, the poll() method in struct file_operations, which supports the polling system calls in previous kernels:

   int (*poll) (struct file *file, struct poll_table_struct *table);

is split into two separate file_operations methods, adding these two new entries to that structure:

   struct wait_queue_head *(*get_poll_head)(struct file *file, int mask);
   int (*poll_mask) (struct file *file, int mask);

L1 terminal fault vulnerability mitigations
The Meltdown CPU vulnerability, first disclosed earlier this year, allowed unprivileged attackers to easily read arbitrary memory on affected systems. The "L1 terminal fault" (L1TF) vulnerability (also going by the name Foreshadow) was then disclosed; it brought those threats back, enabling easy attacks against host memory from inside a guest. Mitigations are available in the Linux 4.19 kernel and have been merged into the mainline; however, they can be expensive for some users.

The block I/O latency controller
Large data centers make use of control groups to balance the available computing resources among competing users. Block I/O bandwidth can be considered one of the most important resources for specific types of workloads, but the kernel's I/O controller was not a complete solution to the problem. This is where the block I/O latency controller comes into the picture. The Linux 4.19 kernel now has a block I/O latency controller that regulates latency (instead of bandwidth) at a relatively low level in the block layer. When it is in use, each control group directory contains an io.latency file that sets the parameters for that group.
A line written to that file follows this pattern:

   major:minor target=target-time

Here, major and minor identify the specific block device of interest, and target-time is the maximum latency (in milliseconds) that this group should experience.

Time-based packet transmission
Time-based packet transmission comes with a new socket option and a new qdisc, designed to buffer packets until a configurable time before their deadline (their tx time). Packets intended for timed transmission should be sent with sendmsg(), with a control-message header of type SCM_TXTIME that indicates the transmission deadline as a 64-bit nanoseconds value.

CAKE queuing discipline
The "Common Applications Kept Enhanced" (CAKE) queuing discipline in Linux 4.19 sits between the higher-level protocol code and the network interface and decides which packets need to be dispatched at any given time. It comprises four components designed to make things work well on home links. It prevents buffers from overfilling and improves various aspects of networking performance, such as bufferbloat reduction and queue management.

For more information, check out the official announcement.

The kernel community attempting to make Linux more secure
KUnit: A new unit testing framework for Linux Kernel
Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux

Getting started with Qt Widgets in Android [video]

Sugandha Lahoti
22 Oct 2018
2 min read
Qt is a powerful, cross-platform graphics development framework. It provides a large set of consistent, standardized libraries and works on many major platforms, including embedded, mobile, desktop, and the web. Qt's significant mobile offerings are aligned with the trend of going mobile; these include QtQuick, QML, Qt widgets in Android, and communicating between C++ and QML.

In this video, Benjamin Hoff introduces us to Qt widgets in Android. He talks about how to install the Qt Android environment, set up Qt Creator for Android deployment, and build for release. This clip is taken from the course Mastering Qt 5 GUI Programming by Benjamin Hoff. With this course, you will master application development with Qt for Android, Windows, Linux, and the web.

Installing the Qt Android environment
Install the Android SDK (Software Development Kit) and the Android NDK (Native Development Kit).
Install the Java SE Development Kit (JDK), or OpenJDK on Linux.
Install Qt built for the architecture you're targeting; you can download it using the Qt online installer or install it using your package manager.

Watch the video to walk through each of these steps in detail. If you liked the video, don't forget to check out the comprehensive course Mastering Qt 5 GUI Programming, packed with step-by-step instructions, working examples, and helpful tips and techniques on working with Qt.

About the author
Benjamin Hoff is a mechanical engineer by education who has spent the first 3 years of his career doing graphics processing, desktop application development, and facility simulation using a mixture of C++ and Python under the tutelage of a professional programmer. After rotating back into a mechanical engineering job, Benjamin has continued to develop software utilizing the skills he developed during his time as a professional programmer.

Qt Creator 4.8 beta released, adds language server protocol
WebAssembly comes to Qt. Now you can deploy your next Qt app in browser
How to create multithreaded applications in Qt

How to avoid NullPointerExceptions in Kotlin [Video]

Sugandha Lahoti
20 Oct 2018
2 min read
Kotlin has been growing rapidly in popularity in recent times. Some say it is even poised to take over from Java as the next universal programming language. One reason is that Kotlin is interoperable with Java: it is possible to write applications containing both Java and Kotlin code, calling one from the other. Secondly, while being interoperable, Kotlin code is far superior to Java code. Like Scala, Kotlin uses type inference to cut down on a lot of boilerplate code and keep things concise. However, unlike Scala, Kotlin code is easy to read and understand, even for someone who may not know Kotlin. Moreover, Kotlin is excellent at addressing the NullPointerException, which forces a lot of null checks in Java programs. In Kotlin, developers can avoid the dreaded NullPointerException by properly handling optional types.

In this video, Nigel Henshaw shows how to avoid NullPointerExceptions in Kotlin. This clip is taken from the course Kotlin - Tips, Tricks, and Techniques by Nigel Henshaw. With this course, you will discover new possibilities with Kotlin and improve your app development process.

How to avoid NullPointerExceptions?
There are three ways to avoid NullPointerExceptions:
Use the Elvis operator for handling null values; the Elvis operator makes the code more concise and readable.
Use safe casts to avoid ClassCastExceptions; a failed safe cast produces null instead of throwing.
Combine safe casts with the Elvis operator, so that a sensible value can be returned when the safe cast yields null.

Watch the video to walk through each of the methods using code examples. If you liked the video, don't forget to check out the comprehensive course Kotlin - Tips, Tricks, and Techniques, packed with step-by-step instructions, working examples, and helpful tips and techniques on working with Kotlin.

About the author
Nigel Henshaw is a mobile software developer. He loves to share his knowledge through his YouTube channel and website. Nigel has worked in the UK, Scotland, and Japan. He has held jobs as a software engineer, consultant, project manager, and general manager of a remote development site.

Implementing Concurrency with Kotlin [Tutorial]
KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta
Kotlin 1.3 RC1 is here with compiler and IDE improvements