Tech News - Programming

573 Articles

Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes

Natasha Mathur
04 Apr 2019
3 min read
Pivotal Inc., a software and services firm, announced yesterday that it has teamed up with Heroku to create Cloud Native Buildpacks for Kubernetes and beyond. Cloud Native Buildpacks turn source code into production-ready, OCI-compatible Docker images and are based on the popular buildpack model. The new project is aimed at making developers more productive with Kubernetes.

The Cloud Foundry Buildpacks team also released a selection of next-gen Cloud Foundry buildpacks that are compatible with the Cloud Native Buildpacks. This will allow users to try buildpacks out on Pivotal Container Service (PKS) and Pivotal Application Service (PAS).

https://twitter.com/pivotalcf/status/1113426937685446657

“The project aims to deliver a consistent platform-to-buildpack contract for use in more places. The interface defined by this contract is informed by learnings from maintaining production-grade buildpacks for years at both Pivotal and Heroku,” states the Pivotal team.

With the new Cloud Native Buildpacks, you can create containers by just pushing the code, without having to manage runtime dependencies yourself. On a “cf push” of custom code, buildpacks automatically add in the framework dependencies and create an application “droplet” that can be run on the platform. This droplet model allows Cloud Foundry to handle all the dependency updates. Application runtimes can also be updated by pulling in the latest buildpacks and rebuilding a droplet. Cloud Native Buildpacks expand on this idea and build an OCI (Open Container Initiative) image, capable of running on any platform. “We believe developers will love the simplicity of this single command to get a production quality container when they prefer not to author and maintain their own Dockerfile,” states the Pivotal team.

Other reasons why Cloud Native Buildpacks are a step ahead of traditional buildpacks:

Portability through the OCI standard. Cloud Native Buildpacks produce OCI images directly from source code, which makes them much more portable and easy to use with Kubernetes and Knative.
Better modularity. Cloud Native Buildpacks are modular, offering platform operators more control over how developers can build their code at runtime.
Speed. Cloud Native Buildpacks build faster because of advanced build caching, layer reuse, and data deduplication.
Fast troubleshooting. Cloud Native Buildpacks help troubleshoot production issues much faster, as they can be used in a developer’s local environment.
Reproducible builds. Cloud Native Buildpacks allow reproducible container image builds.

What next?

The Pivotal team states that Cloud Native Buildpacks need some more work before they are ready for enterprise scenarios. Pivotal is currently exploring three new features: image promotion, operator control, and automated image patching. For image promotion, Pivotal is exploring a build service effective at image updating. This would allow developers to promote images through environments and across PCF foundations. Pivotal is also exploring a declarative configuration model which will deliver new images to your registry whenever your configuration falls out of sync.

“The best developers strive to eliminate toil from their lives. These engineers figure that if a task doesn’t add value, it should be automated... with Cloud Native Buildpacks, developers can happily remove... toil from their jobs,” states the Pivotal team.

For more information, check out the official Pivotal blog.
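The platform-to-buildpack contract described above boils down to buildpacks that detect whether they apply to a source tree and then contribute layers. Here is a toy Python model of that idea; every class and method name here is hypothetical, purely to illustrate the model, not the project’s actual API:

```python
# Toy model of the detect/build contract (hypothetical names, not the
# Cloud Native Buildpacks API): each buildpack inspects the source tree,
# and the platform runs only the buildpacks whose detect phase passes.
import os

class PipBuildpack:
    def detect(self, src):
        return os.path.exists(os.path.join(src, "requirements.txt"))

    def build(self, src, layers):
        layers.append("python-pip: install deps from requirements.txt")

class NpmBuildpack:
    def detect(self, src):
        return os.path.exists(os.path.join(src, "package.json"))

    def build(self, src, layers):
        layers.append("node-npm: run npm install")

def build_image(src, buildpacks):
    """Run the build phase of every buildpack whose detect phase passes."""
    layers = []
    for bp in buildpacks:
        if bp.detect(src):
            bp.build(src, layers)
    return layers  # in the real project, these layers end up in an OCI image

print(build_image(".", [PipBuildpack(), NpmBuildpack()]))
```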
CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
CNCF Sandbox, the home for evolving cloud native projects, accepts Google’s OpenMetrics Project
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits


Microsoft announces .NET standard 2.1

Prasad Ramesh
06 Nov 2018
3 min read
After a year of shipping .NET Standard 2.0, Microsoft announced .NET Standard 2.1 yesterday. In all, 3,000 APIs are planned to be included in .NET Standard 2.1, and progress on GitHub has reached 85% completion at the time of writing. The new features in .NET Standard 2.1 are as follows.

Span<T>

Span<T> was added in .NET Core 2.1. It is an array-like type that allows representing managed and unmanaged memory in a uniform way. Span<T> is an important performance improvement since it allows managing buffers in a more efficient way: it supports slicing without copying and can help reduce allocations and copying.

Foundational APIs working with spans

Span<T> is available as a .NET Standard-compatible NuGet package, but that package cannot extend the members of .NET Standard types that deal with spans. For example, .NET Core 2.1 added many APIs for working with spans, so to add spans to .NET Standard, some companion APIs were added.

Reflection emit

Lightweight Code Generation (LCG) and Reflection Emit are added in .NET Standard 2.1. Two new capability APIs are exposed: one checks for the ability to generate code at all (RuntimeFeature.IsDynamicCodeSupported), and the other checks whether the generated code is interpreted or compiled (RuntimeFeature.IsDynamicCodeCompiled).

SIMD

There has been support for SIMD for a while now; it has been used to speed up basic operations like string comparisons in the BCL. There have been requests to expose these APIs in .NET Standard, as the functionality requires runtime support and cannot be provided meaningfully as a NuGet package.

ValueTask and ValueTask<T>

In .NET Core 2.1, the biggest feature was improvements to support high-performance scenarios, which included making async/await more efficient. ValueTask<T> allows returning results if the operation completed synchronously, without having to allocate a new Task<T>. In .NET Core 2.1 this was improved further, which made it useful to have a corresponding non-generic ValueTask that reduces allocations even for cases where the operation has to complete asynchronously, a feature that types like Socket and NetworkStream now utilize. By exposing these APIs in .NET Standard 2.1, library authors benefit from these improvements both as consumers and as producers.

DbProviderFactories

DbProviderFactories wasn’t available in .NET Standard 2.0; it will be in 2.1. DbProviderFactories allows libraries and applications to make use of a specific ADO.NET provider without knowing any of its specific types at compile time.

Other changes

Many small features have been added across the base class libraries, including System.HashCode for combining hash codes and new overloads on System.String. There are roughly 800 new members in .NET Core, and all of them are added in .NET Standard 2.1.

.NET Framework 4.8 will remain on .NET Standard 2.0, while .NET Core 3.0 and the upcoming versions of Xamarin, Mono, and Unity will be updated to implement .NET Standard 2.1. To ensure correct implementation of APIs, a review board has been set up to sign off on API additions to .NET Standard. The board, chaired by Miguel de Icaza, comprises representatives from the .NET platform, Xamarin and Mono, Unity, and the .NET Foundation. There will also be a formal approval process for new APIs.

To know more, visit the Microsoft blog.
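Span<T> is a .NET type with no direct Python equivalent, but the zero-copy slicing idea behind it can be illustrated with Python’s memoryview. This is a rough cross-language analogy, not an exact equivalent:

```python
# Rough analogy: like Span<T>, a memoryview slices a buffer without
# copying, so writes through the slice are visible in the original.
buf = bytearray(b"hello world")
view = memoryview(buf)[6:]   # zero-copy slice of the last five bytes
view[:] = b"spans"           # write through the view...
print(buf)                   # ...mutates the original: bytearray(b'hello spans')
```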
.NET Core 3.0 and .NET Framework 4.8 more details announced
.NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5
What to expect in ASP.NET Core 3.0


Python serious about diversity, dumps offensive ‘master’, ‘slave’ terms in its documentation

Natasha Mathur
13 Sep 2018
3 min read
Python is set on changing its “master” and “slave” terminology in its documentation and code, in response to complaints that the terminology is offensive. Victor Stinner, a Python developer at Red Hat, started a discussion titled “avoid master/slave terminology” on the Python bug tracker last week. The bug report discusses changing “master” and “slave” in the Python documentation to terms such as “parent”, “worker”, or something similar, based on complaints received “privately”.

“For diversity reasons, it would be nice to try to avoid ‘master’ and ‘slave’ terminology which can be associated to slavery,” mentioned Victor Stinner in the bug report.

Not every Python developer who participated in the discussion agreed with Victor Stinner. One of the developers, Larry Hastings, wrote, “I'm a little surprised by this. It's not like slavery was acceptable when these computer science terms were coined and it's only comparatively recently that they've gone out of fashion. On the other hand, there are some areas in computer software where 'master' and 'slave' are the exact technical terms (e.g. IDE), and avoiding them would lead to confusion.”

Another Python developer, Terry J. Reedy, wrote, “To me, there is nothing wrong with the word 'master', as such. I mastered Python to become a master of Python. Purging Python of 'master' seems ill-conceived. Like Larry, I object to action based on hidden evidence.”

Python is not the only project to have come under scrutiny; the Redis community, Django, and Drupal have all faced the same issue. Drupal changed the terms “master” and “slave” to “primary” and “replica”. Similarly, Django swapped “master” and “slave” for “leader” and “follower”.

To put an end to this debate about the use of politically incorrect language, Guido van Rossum, who resigned as “Benevolent Dictator For Life” (BDFL) in July but is still active as a core developer, was pulled back in. Guido ended the discussion by saying, “I'm closing this now. Three out of four of Victor's PRs have been merged. The fourth one should not be merged because it reflects the underlying terminology of UNIX ptys. There's a remaining quibble about 'pliant children' -> 'helpers' but that can be dealt with as a follow-up PR without keeping this discussion open.”

The final commit on this is as follows:

bpo-34605, pty: Avoid master/slave terms
* pty.spawn(): rename master_read parameter to parent_read
* Rename pty.slave_open() to pty.child_open(), but keep a pty.slave_open alias to pty.child_open for backward compatibility
* os.openpty(), os.forkpty(): rename master_fd/slave_fd to parent_fd/child_fd
* Rename internal variables:
  * Rename master_fd/slave_fd to parent_fd/child_fd
  * Rename slave_name to child_name

For more information on the discussion, be sure to check out the official Python bug report.

Why Guido van Rossum quit as the Python chief (BDFL)
No new PEPS will be approved for Python in 2018, with BDFL election pending
Python comes third in TIOBE popularity index for the first time
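The commit’s pattern of renaming while keeping the old name as an alias is straightforward; here is a minimal illustrative sketch of it (not CPython’s actual code):

```python
# Illustrative sketch (not CPython's actual code) of the rename pattern in
# the commit: the new name becomes canonical and the old name survives as
# an alias, so existing callers keep working.
import os

def child_open(tty_name):
    """Open the child (formerly 'slave') end of a pty and return its fd."""
    return os.open(tty_name, os.O_RDWR | os.O_NOCTTY)

# Backward-compatibility alias, as in "keep a pty.slave_open alias".
slave_open = child_open
```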


Google Open-sources Sandboxed API, a tool that helps in automating the process of porting existing C and C++ code

Amrata Joshi
19 Mar 2019
2 min read
Yesterday, the team at Google open-sourced Sandboxed API, a tool that Google has been using internally in its data centers for years. It is a project for sandboxing C and C++ libraries running on Linux systems, and Google has made it available on GitHub.

Sandboxed API helps developers automate the process of porting their existing C and C++ code to run on top of Sandbox2, Google's custom-made sandbox environment for Linux operating systems. Sandbox2 has also been open-sourced and is included in the Sandboxed API GitHub repository.

Christian Blichmann and Robert Swiecki, from Google's ISE Sandboxing team, said, "Many popular software containment tools might not sufficiently isolate the rest of the OS, and those which do, might require time-consuming redefinition of security boundaries for each and every project that should be sandboxed."

The idea behind sandboxing

The idea behind sandboxing is to prevent bugs from spreading from one process to another, or to the underlying operating system and the kernel. Many software projects process externally generated data that is potentially untrusted, for instance, converting user-provided picture files into different formats or executing user-generated software code. If a software library that parses such data is complex, there is a high possibility that it might fall victim to certain types of security vulnerabilities, such as memory corruption bugs or other problems related to the parsing logic. These vulnerabilities can have a serious impact on security.

To overcome these challenges, developers turn to a software isolation method known as sandboxing. With the help of sandboxing, developers ensure that only specific resources, such as files, networking connections, and other operating system resources, are accessible to the code involved in parsing user-generated content.

The team plans to add support for more operating systems and to bring Sandboxed API to Unix-like systems such as the BSDs (FreeBSD, OpenBSD) and macOS. Google also aims to bring CMake support to the API.

To know more about this news in detail, check out Google’s blog post.

Google to be the founding member of CDF (Continuous Delivery Foundation)
Google announces the stable release of Android Jetpack Navigation
#GooglePayoutsForAll: A digital protest against Google’s $135 million execs payout for misconduct
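Sandbox2 works at the syscall level for C/C++ code on Linux, but the basic idea described above, confining a risky parser so its failures cannot spread, can be sketched in Python with plain process isolation. This is an illustration of the concept only, not Google’s API, and the function names are hypothetical:

```python
# Conceptual illustration only (not Google's Sandboxed API): run a risky
# parser in a separate process so a crash or hang there cannot take down
# the host process. Real sandboxes also restrict syscalls and resources.
import multiprocessing as mp

def risky_parse(data, out):
    # Stand-in for a complex, possibly buggy parser of untrusted input.
    out.put(data.decode("utf-8", errors="replace").upper())

def parse_sandboxed(data, timeout=5.0):
    out = mp.Queue()
    worker = mp.Process(target=risky_parse, args=(data, out))
    worker.start()
    worker.join(timeout)
    if worker.is_alive():       # hung parser: kill it, the host survives
        worker.terminate()
        raise TimeoutError("parser sandbox timed out")
    if worker.exitcode != 0:    # crashed parser: the host survives
        raise RuntimeError("parser sandbox crashed")
    return out.get()

if __name__ == "__main__":
    print(parse_sandboxed(b"untrusted input"))
```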


Golang 1.11 is here with modules and experimental WebAssembly port among other updates

Natasha Mathur
27 Aug 2018
5 min read
The Golang team released Golang 1.11 rc1 two weeks back, and now the much-awaited Golang 1.11 is here. Golang 1.11, released last Friday, comes with changes and improvements to the toolchain, runtime, and libraries, preliminary support for “modules”, and an experimental port to WebAssembly. Golang is a modern programming language by Google, developed back in 2009 for application development. Its simple syntax, concurrency support, and speed make it one of the fastest growing languages in the software industry. Let’s now explore the new features in Golang 1.11.

Ports

Go 1.11 adds an experimental port to WebAssembly (js/wasm), along with other changes.

WebAssembly

Go 1.11 adds the new GOOS value “js” and GOARCH value “wasm” for WebAssembly. Go files named *_js.go or *_wasm.go will now be ignored by Go tools except when those GOOS/GOARCH values are in use. The GOARCH name “wasm” is the official abbreviation of WebAssembly. The GOOS name “js” comes from the host environments, such as web browsers and Node.js, that execute the WebAssembly bytecode; both of these host environments use JavaScript to embed WebAssembly.

RISC-V GOARCH values reserved

The main Go compiler does not provide support for the RISC-V architecture, but Go 1.11 reserves the GOARCH values “riscv” and “riscv64”, as used by Gccgo, which does support RISC-V. This means that Go files named *_riscv.go will also be ignored by Go tools except when those GOOS/GOARCH values are in use.

Other changes

Go 1.11 now requires OpenBSD 6.2 or later, macOS 10.10 Yosemite or later, or Windows 7 or later; support for previous versions of these operating systems has been dropped. It also offers support for the upcoming OpenBSD 6.4 release. Due to changes in the OpenBSD kernel, older versions of Go will not run on OpenBSD 6.4.

With Go 1.11, new environment variable settings have been added for 64-bit MIPS systems, namely GOMIPS64=hardfloat (the default) and GOMIPS64=softfloat. These let you decide whether to use hardware instructions or software emulation for floating-point computations. Go now uses a more efficient software floating-point interface on soft-float ARM systems (GOARM=5), and on ARMv7 a Linux kernel configured with KUSER_HELPERS is no longer needed.

Toolchain

There are also fixes to modules, packages, and debugging in Golang 1.11.

Modules

Preliminary support has been added for a new experimental concept called “modules”, an alternative to GOPATH with integrated support for versioning and package distribution. With the help of modules, developers are no longer limited to working inside GOPATH.

Package loading

A new package, golang.org/x/tools/go/packages, offers a simple API for locating and loading Go source code packages. It’s not yet part of the standard library, but it effectively replaces the go/build package for many tasks.

Build cache requirement

Go 1.11 will be the last release to support setting the environment variable GOCACHE=off (to disable the build cache), which was introduced in Go 1.10.

Improved debugging

The compiler in Go 1.11 offers improved debugging for optimized binaries, including variable location information, line numbers, and breakpoint locations. This makes it possible to debug binaries compiled without -N -l. There’s also experimental support for calling Go functions from within a debugger.

Compiler toolchain

Golang 1.11 offers support for column information in line directives. A new package export data format has also been introduced; it is transparent to end users, except for speeding up build times for large Go projects.

Runtime

The runtime in Go 1.11 now makes use of a sparse heap layout, so there is no longer a limit to the size of the Go heap (the limit was 512 GiB earlier). This also fixes rare “address space conflict” failures in mixed Go/C binaries or binaries compiled with -race.

Library changes

There are various minor updates and changes to the core library in Golang 1.11:

Crypto: Crypto operations such as ecdsa.Sign, rsa.EncryptPKCS1v15, and rsa.GenerateKey now randomly read an extra byte to ensure that tests don’t rely on internal behavior.
debug/elf: Constants such as ELFOSABI and EM have been added.
encoding/asn1: There is now support for “private” class annotations for fields in Marshal and Unmarshal.
image/gif: There is support for non-looping animated GIFs; they are denoted by a LoopCount of -1.
math/big: With Golang 1.11, ModInverse now returns nil when g and n are not relatively prime.

Apart from these major updates, there are many other changes in Golang 1.11. To get more information, be sure to check the official Golang 1.11 release notes.

Writing test functions in Golang [Tutorial]
How Concurrency and Parallelism works in Golang [Tutorial]
GoMobile: GoLang’s Foray into the Mobile World


Julia v1.2 releases with support for argument splatting, Unicode 12, new star unary operator, and more

Vincy Davis
21 Aug 2019
3 min read
Yesterday, the team behind Julia announced the release of Julia v1.2, the second minor release in the 1.x series. It brings new features such as argument splatting, support for Unicode 12, and a new ⋆ (star) unary operator, along with many performance improvements and marginal, non-disruptive changes.

The post states that Julia v1.2 will not have long-term support: “As of this release, 1.1 has been effectively superseded by 1.2, which means there will not likely be any further 1.1.x releases. Our good friend 1.0 is still currently the only long-term support version.”

What’s new in Julia v1.2

Argument splatting (x...) can now be used in calls to the new pseudo-function in constructors.
Support for Unicode 12 has been added.
A new unary operator ⋆ (star) has been added.

New library functions

One-argument forms !=(x), >(x), >=(x), <(x), and <=(x) have been added; they return partially-applied versions of the corresponding functions.
A new getipaddrs() function returns all the IP addresses of the local machine, with the IPv4 addresses first.
New library functions Base.hasproperty and Base.hasfield have been added.

Other improvements in Julia v1.2

Multi-threading changes

It is now possible to schedule and switch tasks during @threads loops, and to perform limited I/O.
A new thread-safe replacement for the Condition type has been added; it can be accessed as Threads.Condition.

Standard library changes

The extrema function now accepts a function argument in the same way as minimum and maximum.
The hasmethod method can now check for matching keyword argument names.
The mapreduce function now accepts multiple iterators.
Functions that invoke commands, like run(::Cmd), now throw a ProcessFailedException rather than an ErrorException when the command fails.
A new no-argument constructor for Ptr{T} has been added; it constructs a null pointer.

Jeff Bezanson, Julia co-creator, says, “If you maintain any packages, this is a good time to add CI for 1.2, check compatibility, and tag new versions as needed.”

Users are happy with the Julia v1.2 release and full of praise for the Julia language. A user on Hacker News comments, “Julia has very well thought syntax and runtime I hope to see it succeed in the server-side web development area.” Another user says, “I’ve recently switched to Julia for all my side projects and I’m loving it so far! For me the killer feature is the seamless GPUs integration.”

For more information on Julia v1.2, head over to its release notes.

Julia co-creator, Jeff Bezanson, on what’s wrong with Julialang and how to tackle issues like modularity and extension
Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
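For readers coming from other languages, splatting and the partially-applied comparison operators above have close Python analogues. This is a rough cross-language analogy, not Julia code:

```python
# Rough Python analogues (this is not Julia code): f(x...) splatting
# corresponds to f(*x), and a partially-applied comparison like >(2)
# corresponds to a one-argument predicate built with functools.partial.
from functools import partial
import operator

coords = [1, 2, 3]
print(max(*coords))                        # splatting: same as max(1, 2, 3)

greater_than_2 = partial(operator.lt, 2)   # like Julia's >(2): y -> y > 2
print([y for y in coords if greater_than_2(y)])   # prints [3]
```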

Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company’s 2018 annual report on LinkedIn yesterday. He talks about Microsoft’s accomplishments in the past year and the results and progress of Microsoft’s modern workplace, business applications, infrastructure, data, AI, and gaming. He also mentions the data and privacy rules adopted by Microsoft and its commitment to “instill trust in technology across everything they do.”

Microsoft’s results and progress

Data and AI

Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. Its Azure Bot Service has nearly 300,000 developers, and the company is on the road to building the world’s first AI supercomputer in Azure. Microsoft also acquired GitHub in recognition of the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications

Microsoft’s investments in Power BI have made it the leader in business analytics in the cloud. Its Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality are being designed for first-line workers, who account for 80 percent of the world’s workforce. New solutions powered by LinkedIn and the Microsoft Graph help companies manage talent, training, and sales and marketing.

Applications and Infrastructure

Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world’s computer. It added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. Microsoft expanded its global data center footprint to 54 regions and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace

More than 135 million people use Office 365 commercial every month. Outlook Mobile is used on 100 million iOS and Android devices worldwide. Microsoft Teams is being used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming

The company surpassed $10 billion in gaming revenue this year. Xbox Live now has 57 million monthly active users, and Microsoft is investing in new services like Mixer and Game Pass. It also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft’s impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft’s technology to build their own digital capabilities. Walmart is also using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of Microsoft’s partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state’s 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient’s heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.

How Microsoft is handling trust and responsibility

Microsoft’s motto is “instilling trust in technology across everything they do.” Nadella says, “We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices.”

Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. It also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and is calling on governments to do more to make the internet safe. It announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

The company is also investing in tools for detecting and addressing bias in AI systems and is advocating government regulation. It is addressing society’s most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, “Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work.” Microsoft has nearly doubled the number of women corporate vice presidents since FY16 and has increased African American/Black and Hispanic/Latino representation by 33 percent.

He concludes by saying, “I’m proud of our progress, and I’m proud of the more than 100,000 Microsoft employees around the world who are focused on our customers’ success in this new era.”

Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
‘Employees of Microsoft’ ask Microsoft not to bid on US Military’s Project JEDI in an open letter
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members


Red Hat announces the general availability of Red Hat OpenShift Service Mesh

Amrata Joshi
27 Aug 2019
3 min read
Last week, the team at Red Hat, a provider of enterprise open source solutions, announced the general availability of Red Hat OpenShift Service Mesh for connecting, managing, observing, and simplifying service-to-service communication of Kubernetes applications on Red Hat OpenShift 4.

The OpenShift Service Mesh is based on the Istio, Kiali, and Jaeger projects and is designed to deliver an end-to-end developer experience around microservices-based application architectures. It manages the network connections between containerized applications and eases the complex task of implementing bespoke networking services for applications and business logic.

Larry Carvalho, research director at IDC, said in a statement to Business Wire, “Service mesh is the next big area of disruption for containers in the enterprise because of the complexity and scale of managing interactions with interconnected microservices. Developers seeking to leverage Service Mesh to accelerate refactoring applications using microservices will find Red Hat’s experience in hybrid cloud and Kubernetes a reliable partner with the Service Mesh solution.”

Developers can now improve the implementation of microservice architectures by natively integrating the service mesh into the OpenShift Kubernetes platform. The OpenShift Service Mesh improves traffic management by including service observability and visualization of the mesh topology.

Ashesh Badani, Red Hat’s senior VP of Cloud Platforms, said in a statement, “The addition of Red Hat OpenShift Service Mesh allows us to further enable developers to be more productive on the industry's most comprehensive enterprise Kubernetes platform by helping to remove the burdens of network connectivity and management from their jobs and allowing them to focus on building the next-generation of business applications.”

Features of Red Hat OpenShift Service Mesh

Tracing

OpenShift Service Mesh features tracing based on Jaeger, an open, distributed tracing system. Tracing helps developers track a request between services and provides insight into the request process from start to end.

Visualization and observability

The Service Mesh also provides an easier way to view its topology and observe how the services interact. Visualization helps in understanding how the services are managed and how traffic is flowing in near-real time, which makes management and troubleshooting easier.

Service Mesh installation and configuration

OpenShift Service Mesh features “one-click” installation and configuration with the help of a Service Mesh Operator and the Operator Lifecycle Management framework, so developers can deploy applications into a service mesh more easily. The Service Mesh Operator deploys Istio, Jaeger, and Kiali together, minimizing management burdens and automating tasks such as installation, service maintenance, and lifecycle management.

Developed with open projects

OpenShift Service Mesh is developed with open projects and is built in collaboration with leading members of the Kubernetes community.

Increased developer productivity

The Service Mesh integrates communication policies without requiring changes to application code or the integration of language-specific libraries.

To know more about Red Hat OpenShift Service Mesh, check out the official website.

Red Hat joins the RISC-V foundation as a Silver level member
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Red Hat rebrands logo after 20 years; drops Shadowman


Eclipse IDE’s Photon release will support Rust

Pavan Ramchandani
29 Jun 2018
2 min read
The Eclipse Foundation announced the Photon release of the Eclipse IDE, and with it, support for the Rust language. This support gives Rust developers a native Eclipse IDE working experience, meeting the IDE support and learning demands of the Rust community. This release marks the thirteenth annual simultaneous release of Eclipse.

The important features in the Photon release are as follows:

Full Eclipse IDE support for building, debugging, running, and packaging Rust applications, giving a good user experience for Rust development.
More support for C# editing and debugging, including syntax coloring, autocomplete suggestions, diagnostics, and navigation.
More frameworks added to the IDE, such as RedDeer (a framework for building automated tests), Yasson (a Java framework for binding with JSON documents), and JGit (Git for Java), among others.
More updates and features for the dynamic language toolkit, the Eclipse Modeling Framework (EMF), PHP development tools, C/C++ development tools, tools for Cloud Foundry, and a dark theme with improvements to background colors and popup dialogs.

With the Photon release, the Eclipse Foundation has also introduced support for the Language Server Protocol (LSP). With LSP-based releases, Eclipse will deliver support for popular and emerging languages in the IDE. Within the normal release cycle, LSP will focus on keeping pace with emerging tools and technologies, and on developers and their commercial needs, in future releases.

For more information on the Photon project and contributing to the Eclipse community, you can check out the Eclipse Meetup event.

Read more
What can you expect from the upcoming Java 11 JDK?
Perform Advanced Programming with Rust
The top 5 reasons why Node.js could topple Java


“Facebook is the new Cigarettes”, says Marc Benioff, Salesforce Co-CEO

Kunal Chaudhari
02 Oct 2018
6 min read
So, it was that time of the year when Salesforce enthusiasts, thought leaders, and pioneers gathered in downtown San Francisco to attend the annual Dreamforce conference last week. This year marked the 15th anniversary of the Salesforce annual conference, with over 100,000 trailblazers flocking to the Bay Area. Throughout these years, technological development in the platform has been the focal point of these conferences, but it was different this time around. A lot has happened between the 2017 conference and now, especially after Facebook’s Cambridge Analytica scandal. First, WhatsApp co-founder Jan Koum parted ways with Facebook, and now the Instagram co-founders have called it quits. Interestingly, Marc Benioff gave an interview to Bloomberg Technology in which he condemned Facebook as the “new cigarettes”.

To regulate or not to regulate, that is the question

Marc Benioff has been a vocal critic of the social media platform. Earlier this year, when innovators and tech leaders gathered at the annual World Economic Forum in the Swiss Alps of Davos, Benioff was one of the panelists discussing trust in technology, where he made some interesting points. He took the example of the financial industry a decade ago, where bankers were pretty confident that new products like credit default swaps (CDS) and collateralized debt obligations (CDO) would lead to better economic growth, but instead they led to the biggest financial crisis the world had ever seen. Similarly, he argued that cigarettes were introduced as a great pastime product, without any background on their adverse health effects. To cut the story short, the point Benioff was trying to make is that these industries were able to take advantage of the addictive behavior of humans because of a clear lack of regulation from governmental bodies. It was only when regulators became strict with these sectors and public reforms came into the picture that these products were brought under control.

Similarly, Benioff has called for the regulation of companies in light of the recent news linking Russian interference to the US presidential elections. He urged the CEOs of companies to take better responsibility for their consumers and their products, without explicitly mentioning any name. Let’s take a guess: Mark Zuckerberg, anyone?

While Benioff made a strong case for regulation, the solution seemed to be more politically driven. Rachel Botsman, Visiting Academic and Lecturer at the Saïd Business School, University of Oxford, argued that regulators are not aware of the new decentralized nature of today’s technological platforms. Ultimately, who do we want as the arbiters of truth: Facebook, regulators, or users? And where does the hierarchy of accountability lie in this new structure of platforms? The big question remains.

The ethical and humane side of technology

Fast forward to Dreamforce 2018, with star-studded guest speakers ranging from former American Vice President Al Gore to Andre Iguodala of the NBA’s Golden State Warriors. Benioff started with his usual opening keynote, but this time with a lot of enthusiasm, or as one might say, in full evangelical mode; the message from the Salesforce CEO was very clear: “We are in the fourth industrial revolution.” Salesforce announced plenty of new products and some key strategic business partnerships, with the likes of Apple and AWS now joining Salesforce.

While these announcements summarized the technological advancements in the platform, his interview with Bloomberg Technology’s Emily Chang was timely. The interview started casually with talk of Benioff sharing his job with new co-CEO Keith Block, but soon turned to the news of Instagram founders Kevin Systrom and Mike Krieger leaving parent company Facebook. While Benioff maintained his position on regulation, he also discussed the ethics and humane side of technology.

The ethics of technology has come under the spotlight in recent months with the advancements in artificial intelligence. To address these questions, Benioff said that Salesforce has taken its first step by setting up the “Office of Ethical and Humane Use of Technology” at the Salesforce Tower in San Francisco. At first glance, this initiative looks like a solid first step towards solving the problem of technology being used for unethical work. But going back to the argument posed by Rachel Botsman: who actually leverages technology to do unethical work? Is it the company or the consumer?

While Salesforce boasts about its stand on the ethics of building a technological system, Marc Benioff is still silent on the question of Salesforce’s ties with the US Customs and Border Protection (CBP) agency, which follows Donald Trump’s strong anti-immigration agenda. Protesters took a stand against this issue during the Salesforce conference, and hundreds of Salesforce employees wrote an open letter to Benioff asking him to cut ties with the CBP. In response, Benioff said that Salesforce’s contract with CBP does not deal directly with the separation of children at the Mexican border.

One decision at a time

Ethics is largely driven by human behavior. While innovators believe that technological advancements should happen regardless of the outcome, it is the responsibility of every stakeholder in the company, be it a developer, an executive, or a customer, to take action against unethical work. And with each mistake, companies and CEOs are given opportunities to set things right. Take McKinsey & Company, for example. The top management consultancy was under fire due to its scandal involving the South African government. But when the firm again came under scrutiny over its ties with the CBP, McKinsey’s new managing partner, Kevin Sneader, said that the firm “will not, under any circumstances, engage in any work, anywhere in the world, that advances or assists policies that are at odds with our values.” It’s now time for companies like Facebook and Salesforce to set the benchmark for the future of technology.

How far will Facebook go to fix what it broke: Democracy, Trust, Reality
SAP creates AI ethics guidelines and forms an advisory panel
The Cambridge Analytica scandal and ethics in data science
Introducing Deon, a tool for data scientists to add an ethics checklist
The ethical dilemmas developers working on Artificial Intelligence products must consider
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms

Pull Panda is now a part of GitHub; code review workflows now get better!

Amrata Joshi
18 Jun 2019
4 min read
Yesterday, the team at GitHub announced that it has acquired Pull Panda for an undisclosed amount, to help teams create more efficient and effective code review workflows on GitHub.

https://twitter.com/natfriedman/status/1140666428745342976

Pull Panda helps thousands of teams work together on code and improve their process by combining three apps:

Pull Reminders: Users get a prompt in Slack whenever a collaborator needs a review. Automatic reminders ensure that pull requests aren’t missed.
Pull Analytics: Users get real-time insight and can make data-driven improvements to create a more transparent and accountable culture.
Pull Assigner: Users can automatically distribute code review across their team, so that no one gets overloaded and knowledge is spread around.

Pull Panda helps teams ship faster and gain insight into bottlenecks in their process.

Abi Noda, the founder of Pull Panda, highlighted the major reasons for starting Pull Panda. According to him, there were two major pain points. The first was that on fast-moving teams, pull requests are often forgotten, which causes delays in code reviews and, eventually, delays in shipping new features to customers. Abi Noda stated in a video, “I started Pull Panda to solve two major pain points that I had as an engineer and manager at several different companies. The first problem was that on fast moving teams, pull requests easily are forgotten about and often slip through the cracks. This leads to frustrating delays in code reviews and also means it takes longer to actually ship new features to your customers.”

https://youtu.be/RtZdbZiPeK8

To solve this problem, the team built Pull Reminders, a GitHub app that automatically notifies the team about their code reviews. The second problem was that it was difficult to measure and understand the team’s development process in order to identify bottlenecks. To solve it, the team built Pull Analytics to provide real-time insight into the software development process. It also highlights the current code review workload across the team, so the team knows who is overloaded and who might be available.

Many customers also discovered that the majority of their code reviews were done by the same set of people on the team. To solve this problem, the team built Pull Assigner, which offers two algorithms for automatically assigning reviewers. The first is Load Balance, which equalizes the number of reviews so everyone on the team does the same number. The second is a round-robin algorithm that randomly assigns additional reviewers so that knowledge can be spread across the team. An illustrative sketch of both strategies follows below.

Nat Friedman, CEO of GitHub, said, “We'll be integrating everything Abi showed you directly into GitHub over the coming months. But if you're impatient, and you want to get started now, I'm happy to announce that all three of the Pull Panda products are available for free in the GitHub marketplace starting today. So we hope you enjoy using Pull Panda and we look forward to your feedback. Goodbye. It's over.”

Pull Panda will no longer offer its Enterprise plan; existing Enterprise customers can continue to use the on-premises offering. All paid subscriptions have been converted to free subscriptions, and new users can install Pull Panda for their organizations for free from the Pull Panda website or GitHub Marketplace.
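Here is the sketch of the two assignment strategies in Python. It is illustrative only, not Pull Panda’s actual implementation, and the team names and review counts are invented:

```python
# Illustrative sketch (not Pull Panda's implementation; the team and
# review counts are invented) of the two reviewer-assignment strategies.
import itertools

TEAM = ["abi", "nat", "mei", "sam"]
open_reviews = {"abi": 4, "nat": 1, "mei": 2, "sam": 1}

def assign_load_balanced(author):
    """Pick the eligible teammate with the fewest open reviews."""
    eligible = [m for m in TEAM if m != author]
    reviewer = min(eligible, key=lambda m: open_reviews[m])
    open_reviews[reviewer] += 1
    return reviewer

_rotation = itertools.cycle(TEAM)

def assign_round_robin(author):
    """Rotate through the team so review knowledge spreads around."""
    while True:
        reviewer = next(_rotation)
        if reviewer != author:
            return reviewer

print(assign_load_balanced("abi"))  # 'nat' (fewest open reviews)
print(assign_round_robin("abi"))    # next teammate in the rotation
```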
The official GitHub blog post reads, “We plan to integrate these features into GitHub but hope you’ll start benefiting from them right away. We’d love to hear what you think as we continue to improve how developers work together on GitHub.” To know more about this news, check out GitHub’s post.

GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
GitHub Satellite 2019 focuses on community, security, and enterprise


What’s new in Visual Studio Code 1.22

Amarabha Banerjee
10 Apr 2018
3 min read
Microsoft has recently released Visual Studio Code 1.22 with a few additions and improvements. The primary new feature is called “Logpoints”. The idea of Logpoints is very literal: they are breakpoint-like markers that log a message instead of halting execution, so developers can keep track of events without stopping the debugger.

The primary changes are:

Syntax-aware code folding: This feature allows better code folding for CSS, HTML, JSON, and Markdown files. It ensures that code folding is based on code syntax rather than indentation, which makes the code much more readable and developer friendly.
Conversion to ES6 refactoring: How many times have you thought that a little bit of help while coding would have made your coding experience better? Visual Studio Code has added this feature in the new release. A suggestion action (an elliptical hover button) will offer conversions to newer ES6 syntax, and developers have the choice to accept or modify them. A welcome feature for new and mid-level programmers, for sure.
Auto attach to process: This feature provides a lot of help for Node.js developers. It automatically starts debugging Node.js programs and applications the moment you launch them, eliminating the need for a dedicated launch configuration.

The other important features of the new version are:

Cross-file error, warning, and reference navigation: This helps you navigate through different workspaces efficiently.
Improved large file support: This enables faster syntax highlighting and better memory allocation for bigger files, making the overall experience faster.
Multi-line links in the terminal: This feature lets hyperlinks span several lines in the terminal.
Better organization of JavaScript/TypeScript imports: This feature helps programmers remove unused imports and sort them in a more orderly manner.
Emmet wrap preview: This feature provides a live preview for Emmet’s “wrap with abbreviation” functionality.

With these new and exciting features, Visual Studio Code surely is moving towards a more user-friendly and predictive coding platform for programmers. We will keep a close watch on future releases and share updates on how they target better code reusability, easier imports, and better debugging functionality. Read about the full update on the official Visual Studio Code website.

C++, SFML, Visual Studio, and Starting the first game


A vulnerability discovered in Kubernetes kubectl cp command can allow malicious directory traversal attack on a targeted system

Amrata Joshi
25 Jun 2019
3 min read
Last week, the Kubernetes team announced that a security issue (CVE-2019-11246) had been discovered in the kubectl cp command. According to the team, this issue could lead to a directory traversal in such a way that a malicious container could replace or create files on a user’s workstation. The vulnerability impacts kubectl, the command-line interface used to run commands against Kubernetes clusters.

The vulnerability was discovered by Charles Holmes of Atredis Partners as part of the ongoing Kubernetes security audit sponsored by the CNCF (Cloud Native Computing Foundation). This particular issue is a client-side defect, and it requires user interaction to exploit. According to the post, the issue is of high severity, and the Kubernetes team encourages users to upgrade kubectl to Kubernetes 1.12.9, 1.13.6, 1.14.2, or later versions to fix it. To upgrade, users need to follow the installation instructions from the docs.

The announcement reads, “Thanks to Maciej Szulik for the fix, to Tim Allclair for the test cases and fix review, and to the patch release managers for including the fix in their releases.”

The kubectl cp command allows copying files between containers and the user’s machine. To copy files from a container, Kubernetes runs tar inside the container to create a tar archive and copies it over the network, after which kubectl unpacks it on the user’s machine. If the tar binary in the container is malicious, it could run any code and generate unexpected, malicious results. An attacker could use this to write files to any path on the user’s machine when kubectl cp is called, limited only by the system permissions of the local user.

The current vulnerability is quite similar to CVE-2019-1002101, an earlier issue in the kubectl binary, specifically in the kubectl cp command, which an attacker could likewise exploit to write files to any path on the user’s machine.

Wei Lien Dang, co-founder and vice president of product at StackRox, said, “This vulnerability stems from incomplete fixes for a previously disclosed vulnerability (CVE-2019-1002101). This vulnerability is concerning because it would allow an attacker to overwrite sensitive file paths or add files that are malicious programs, which could then be leveraged to compromise significant portions of Kubernetes environments.”

Users are advised to run kubectl version --client; if it does not report client version 1.12.9, 1.13.6, or 1.14.2 or newer, they are running a vulnerable version that needs to be upgraded.

To know more about this news, check out the announcement.

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!
HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
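To see why the tar-based copy described above is dangerous, here is a Python sketch of the vulnerability class: an archive whose member names escape the destination directory, and a check that blocks it. This illustrates directory traversal in general, not kubectl’s actual code:

```python
# Illustration of the vulnerability class (not kubectl's actual code):
# a tar member whose name contains ".." escapes the destination directory
# unless the extractor normalizes and checks every path first.
import io
import os
import tarfile

def build_malicious_archive():
    """Simulate a compromised container's tar emitting a traversal path."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        payload = b"attacker-controlled content"
        info = tarfile.TarInfo(name="../../.bashrc")  # escapes the dest dir
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

def safe_extract(data, dest):
    """Refuse any member that would land outside dest after normalization."""
    dest = os.path.realpath(dest)
    with tarfile.open(fileobj=io.BytesIO(data)) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if not target.startswith(dest + os.sep):
                raise RuntimeError(f"blocked traversal: {member.name}")
            tar.extract(member, dest)

try:
    safe_extract(build_malicious_archive(), "/tmp/scratch")
except RuntimeError as exc:
    print(exc)  # blocked traversal: ../../.bashrc
```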

GitHub down for a complete day due to failure in its data storage system

Natasha Mathur
22 Oct 2018
3 min read
Update, 23rd October 2018: As of Monday at 23:00 UTC, all GitHub services have returned to normal. The GitHub team posted an update on their blog mentioning, “we take reliability very seriously and sincerely apologize for this disruption. Millions of people and businesses depend on GitHub, and we know that our community feels the effects of our availability issues acutely. We are conducting a thorough and transparent root cause analysis and mitigation plan, which will be published in the coming days.”

GitHub is facing issues due to a failure in its data storage system, which left the site broken for a complete day. The outage started at about 23:00 UTC on Sunday. GitHub engineers are working on fixing the issue, and the GitHub team tweeted about two hours ago:

https://twitter.com/githubstatus/status/1054224055673462786

What’s confusing about this outage is that there’s no obvious way to tell the site is down, as the website’s backend Git services are still up and running. However, users are facing a range of issues, such as not being able to log in, outdated files being served, missing branches, and being unable to submit Gists, bug reports, posts, etc.

The team updated their status to “We continue working to repair a data storage system for GitHub.com. You may see inconsistent results during this process.” They further told users, “During this time, information displayed on GitHub.com is likely to appear out of date; however no data was lost. Once service is fully restored, everything should appear as expected. Further, this incident only impacted website metadata stored in our MySQL databases, such as issues and pull requests. Git repository data remains unaffected and has been available throughout the incident.”

The team also mentioned that it will continue to update users and will provide an estimated time to resolution via its status page.

GitHub is a very popular web-based hosting service for software development projects that use the Git revision control system. It is used extensively by software engineers, developers, and open source projects all around the world. Since a major chunk of people’s daily work depends on GitHub, developers are venting their frustration on social media:

https://twitter.com/AmeliasBrain/status/1054149648108085248
https://twitter.com/michaelansaldi/status/1054175097609732096
https://twitter.com/sajaraki/status/1054189413616373761

GitHub is also used by major companies such as Twitter, Yelp, and Adobe to host their open source projects. There haven’t been any further updates from the GitHub team, and we can only wait to learn the real cause of the outage.

Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows
GitHub is bringing back Game Off, its sixth annual game building competition, in November
GitHub comes to your code Editor; GitHub security alerts now have machine intelligence


Python 3.7.2rc1 and 3.6.8rc1 released

Natasha Mathur
12 Dec 2018
2 min read
The Python team released versions 3.7.2rc1 and 3.6.8rc1 yesterday. Python 3.7.2rc1 is the release preview of the second maintenance release of Python 3.7, and Python 3.6.8rc1 is the release preview of the eighth and last maintenance release of Python 3.6. These latest releases include the following key updates.

Key updates in Python 3.7.2rc1

A new C API for thread-local storage has been added: a new Thread Specific Storage (TSS) API in CPython supersedes the existing TLS API within the CPython interpreter, and the old API has been removed.
Deterministic .pyc files, called “hash-based” .pyc files, have been added. Python still uses timestamp-based invalidation by default and does not generate hash-based .pyc files at runtime; hash-based .pyc files can be generated with py_compile or compileall.
Core support for the typing module and generic types has been added.
Customized access to module attributes is allowed: you can now define __getattr__() on modules, and it will be called whenever a module attribute is not found. Defining __dir__() on modules is also allowed.
DeprecationWarning handling has been improved.
The insertion-order-preserving nature of dict objects has now become an official part of the Python language spec.

Key updates in Python 3.6.8rc1

Keyword argument order is preserved, meaning that **kwargs in a function signature is now guaranteed to be an insertion-order-preserving mapping.
Python 3.6.8rc1 offers simple customization of subclass creation without using a metaclass. The new __init_subclass__ classmethod is called on the base class whenever a new subclass is created.
A new “secrets” module has been added to the standard library that reliably generates cryptographically strong pseudo-random values suited for managing secrets like account authentication, tokens, etc.
A frame evaluation API has been added to CPython that makes frame evaluation pluggable at the C level. This allows debuggers and JITs to intercept frame evaluation before Python code execution begins.
Python 3.6.8rc1 offers formatted string literals, or f-strings. Formatted string literals work similarly to the format strings accepted by str.format(): they comprise replacement fields surrounded by curly braces, which are expressions evaluated at run time and formatted using the format() protocol.

For more information, check out the official release notes for Python 3.7.2rc1 and 3.6.8rc1.

Python 3.7.1 and Python 3.6.7 released
IPython 7.2.0 is out!
SatPy 0.10.0, python library for manipulating meteorological remote sensing data, released
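A few of the features listed above are easy to demonstrate directly; here is a short, self-contained sketch covering the secrets module, f-strings, and __init_subclass__:

```python
# A short, self-contained demo of three features mentioned above:
# the secrets module, f-strings, and __init_subclass__.
import secrets

token = secrets.token_hex(16)  # cryptographically strong random hex string
user = "alice"
print(f"issued token {token!r} to {user}")  # f-string, evaluated at run time

class PluginBase:
    registry = []

    def __init_subclass__(cls, **kwargs):
        # Called on the base class whenever a subclass is created,
        # without needing a metaclass.
        super().__init_subclass__(**kwargs)
        PluginBase.registry.append(cls)

class CsvPlugin(PluginBase): ...
class JsonPlugin(PluginBase): ...

print([c.__name__ for c in PluginBase.registry])  # ['CsvPlugin', 'JsonPlugin']
```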