Tech News - Programming

573 Articles

NativeScript 6.0 releases with NativeScript AppSync, TabView, Dark theme and much more!

Amrata Joshi
19 Jul 2019
2 min read
Yesterday, the team behind NativeScript announced the release of NativeScript 6.0. This release enables faster delivery of patches with the help of NativeScript AppSync and ships a NativeScript Core Theme that works for all NativeScript components. It also comes with an improved TabView that enables common scenarios without custom development, along with support for AndroidX and Angular 8.

https://twitter.com/ufsa/status/1151755519062958081

Introducing NativeScript AppSync
Yesterday, the team also introduced NativeScript AppSync, a beta service that enables users to deliver a new version of their application instantly. Users can have a look at the demo here: https://youtu.be/XG-ucFqjG6c

Core Theme v2 and Dark Theme
The NativeScript Core Theme provides common UI infrastructure for building consistent and good-looking user interfaces. The team is also introducing a Dark Theme that comes with the skins of the Light Theme.

Kendo Themes
Users who use Kendo components in their web applications can now reuse their Kendo theme in NativeScript. They can also use the Kendo Theme Builder to build a new theme for their NativeScript application.

Plug and play
With this release, the NativeScript Core Theme is completely plug and play. Users can manually set classes on their components and easily install the theme.

TabView
All the components of the TabView are now styleable, and font icons are now supported. Users can have multiple nested TabView components, similar to having tabs and bottom navigation on the same page. These new capabilities are still in beta.

Bundle Workflow
With NativeScript 6.0, the NativeScript CLI supports the Bundle Workflow, a single unified way of building applications. Hot Module Replacement (HMR) is also enabled by default; users can disable it by passing the `--no-hmr` flag to the executed command.

To know more about this news, check out the official blog post.

NativeScript 5.0 released with code sharing, hot module replacement, and more!
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript
NativeScript 4.1 has been released

Microsoft announces new dual-screen device Surface Neo and Windows 10 X, to be launched next year

Vincy Davis
03 Oct 2019
3 min read
Yesterday, Microsoft made a number of announcements at its annual Surface hardware event in New York. While the event included interesting announcements such as the new Surface Pro 7, Surface Earbuds, and more, the main highlights were the new dual-screen Microsoft Surface Neo and Windows 10X. The dual-screen Surface Neo is expected to go on sale in 2020, before the holiday season. To enable dual-screen devices to work smoothly, Microsoft has redesigned Windows 10 into a new variant, Windows 10X, also known by the codename "Santorini".

In a statement to The Verge, Joe Belfiore, head of Windows experiences, said, "We see people using laptops. We see people using tablets. We saw an opportunity both at Microsoft and with our partners to fill in some of the gaps in those experiences and offer something new."

At the event, Microsoft said that it is announcing the new hardware and software early to help developers come up with exclusive applications ahead of the launch. It also added that Windows 10X is not a new operating system, but a more adaptable form of Windows 10. This means that Windows 10X will only be available on dual-screen devices and not as a standalone copy.

Read Also: A Microsoft Windows bug deactivates Windows 10 Pro licenses and downgrades to Windows 10 Home, users report

The Surface Neo has two separate 9-inch displays that fold out to a full 13-inch workspace. An intricate hinge allows the device to switch into a variety of modes. The device also has a Bluetooth keyboard that flips, slides, and locks into place with magnets and can be stored and secured to the rear of the device. Surface Neo also comes with a Surface Slim Pen, which attaches magnetically. Microsoft has maintained that the device is not completely ready and more developments can be expected by the time of its launch.

Microsoft may modularize the Windows 10 core technology and use the Start menu to display it in HoloLens. Similarly, Windows 10X can put the taskbar or Start menu on either panel as needed, and users will be able to use the Start menu on whichever panel they are working on.

Read Also: Microsoft Azure VP demonstrates Holoportation, a reconstructed transmittable 3D technology

Windows 10X could be used in a number of ways: note-taking, mobile presentation, portable all-in-one, laptop, and reading. The engineering lead on Windows 10X, Carmen Zlateff, says, "We're working to take the best of the applications that people need and use most — things like Mail, Calendar, and PowerPoint — and bring them over to dual screens in a way that creates flexible and rich experiences that are unique to this OS and devices." Zlateff further adds, "Our goal is that the vast majority of apps in the Windows Store will work with Windows 10X."

Users are excited about the new Surface Neo and Windows 10X announcements.

https://twitter.com/RoguePlanetoid/status/1179447805036941312
https://twitter.com/BenBajarin/status/1179414618021748737

TensorFlow 2.0 released with tighter Keras integration, eager execution enabled by default, and more!
How Chaos Engineering can help predict and prevent cyber-attacks preemptively
An unpatched vulnerability in NSA's Ghidra allows a remote attacker to compromise exposed systems

Introducing WAPM, a Package Manager for WebAssembly

Bhagyashree R
24 Apr 2019
2 min read
Yesterday, Syrus Akbary, the founder and CEO of Wasmer, announced the release of the WebAssembly Package Manager (WAPM). This package manager is being introduced to make it easier for developers to use WebAssembly anywhere.

Why is WAPM being introduced?
With this package manager, the Wasmer team aims to improve the developer ergonomics of WebAssembly. Explaining the advantages of WAPM, Akbary said, "WebAssembly is an abstraction on top of chipset instructions, this enables wasm modules to run very easily on any machine. If we move this abstraction up we can unlock the potential of having universal binaries that can run anywhere, even on platforms/chipsets not supported at the moment of releasing the binary."

We already have the Node.js Package Manager (NPM), which hosts WebAssembly modules. However, the team believes that WebAssembly on the server side is a different use case and hence deserves a package manager specifically designed for it.

What are its advantages?
Following are the advantages that WAPM brings:

- Allows developers to easily publish, download, and use WebAssembly modules.
- Allows developers to easily define commands on top of Wasm.
- Enables developers to create universal libraries in WebAssembly that can be used from all languages, including Python, PHP, JavaScript, Rust, C, and C++.
- Enables support for different ABIs (Application Binary Interfaces) such as WASI, Emscripten, or even new ones in the future.

This release comes with a 'wapm' command-line application and a website and package registry: wapm.io. Here's a video demonstrating how WAPM works: https://www.youtube.com/watch?v=qDRcHIBFu0c

After seeing the governance issues with NPM, many developers are skeptical about another private community package manager. "I've grown uncomfortable with NPM being operated by NPM Inc instead of The Node.js Foundation, but it's a hard thing to change once it's established. We should hesitate to support the establishment of another community package manager by a for-profit company," a Redditor said.

To know more in detail, check out the official announcement.

Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
Fastly open sources Lucet, a native WebAssembly compiler and runtime
How you can replace a hot path in JavaScript with WebAssembly

Introducing CUE, an open-source data constraint language that merges types and values into a single concept

Bhagyashree R
03 Sep 2019
4 min read
Inspired by Google's General Configuration Language (GCL), a team of developers has come up with a new language called CUE. It is an open-source data validation language that aims to simplify tasks involving defining and using data. Its applications include data validation, data templating, configuration, querying, code generation, and even scripting.

There are two core aspects of CUE that set it apart from other programming or configuration languages: first, it treats types as values, and second, those values are ordered into a lattice, a partially ordered set. Explaining the concept behind CUE, the developers write, "CUE merges the notion of schema and data. The same CUE definition can simultaneously be used for validating data and act as a template to reduce boilerplate. Schema definition is enriched with fine-grained value definitions and default values. At the same time, data can be simplified by removing values implied by such detailed definitions. The merging of these two concepts enables many tasks to be handled in a principled way."

These two properties account for the various advantages CUE provides.

Advantages of using CUE

Improved typing capabilities
Most configuration languages today focus mainly on reducing boilerplate and provide minimal typing support. CUE offers "expressive yet intuitive and compact" typing capabilities by unifying types and values.

Enhanced readability
CUE enhances readability by allowing a single definition in one file to apply to values in many other files, so developers need not open various files to verify validity.

Data validation
The 'cue' command-line tool gives you a straightforward way to define and verify a schema. You can also use CUE constraints to verify document-oriented databases such as Mongo.

Read also: MongoDB announces new cloud features, beta version of MongoDB Atlas Data Lake and MongoDB Atlas Full-Text Search and more!

Easily validate backward compatibility
With CUE, you can easily verify whether a newer version of a schema is backward compatible with an older one. CUE considers an API backward compatible if it subsumes the older one, or if the old one is an instance of the new one.

Allows combining constraints from different sources
CUE is commutative, which means you can combine constraints from various sources, such as a base template, code, and client policies, in any order.

Allows normalization of data definitions
Combining constraints from many sources can also result in a lot of redundancy. CUE's logical inference engine addresses this by automatically reducing constraints. Its API allows computing and selecting between different normal forms to optimize for a certain representation.

Code generation and extraction
Currently, CUE can extract definitions from Go code and Protobuf definitions. It facilitates the use of existing sources, or a smoother transition to CUE, by allowing existing sources to be annotated with CUE expressions.

Querying data
CUE constraints can be used to find patterns in data. You can perform more elaborate querying using a 'find' or 'query' subcommand, and you can also query data programmatically using the CUE API.

On a Hacker News discussion about CUE, many developers compared it with Jsonnet, which is a data templating language. A user wrote, "It looks like an alternative to Jsonnet which has schema validation & strict types. IMO, Jsonnet syntax is much simpler, it already has integration with IDEs such as VSCode and Intellij and it has enough traction already. Cue seems like an e2e solution so it's not only an alternative to Jsonnet, it also removes the need of JSON Schema, OpenAPI, etc. so given that it's a 5 months old project, still has too much time to evolve and mature."

Another user added, "CUE improves in Jsonnet in primarily two areas, I think: Making composition better (it's order-independent and therefore consistent), and adding schemas. Both Jsonnet and CUE have their origin in GCL internally at Google. Jsonnet is basically GCL, as I understand it. But CUE is a whole new thing."

Others also praised its features. "When you consider the use of this language within a distributed system it's pretty freaking brilliant," a user commented. Another user added, "I feel like that validation feature could theoretically save a lot of people that occasional 1 hour of their time that was wasted because of a typo in a config file leading to a cryptic error message."

Read more about CUE and its concepts in detail on its official website.

Other news in Programming languages

'Npm install funding', an experiment to sustain open-source projects with ads on the CLI terminal faces community backlash
"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
Kotlin 1.3.50 released with 'duration and time Measurement' API preview, Dukat for npm dependencies, and much more!

Programming news bulletin - Thursday 26 April

Richard Gall
26 Apr 2018
2 min read
Welcome to this week's programming news bulletin. There's some interesting news about Qt, and just weeks after the release of Java 10, you can check out the upcoming specs for Java 11.

Programming news from the Packt Hub
- JetBrains ReSharper Ultimate 2018.1 is now available for download.

Programming news from across the web
- Java 11 specifications are out for public review. Java's new release cycle is bringing updates and changes at an impressive pace. You can now review what's coming in Java 11.
- Oracle tells Java SE 8 users that they need to buy a licence from 2019.
- Introducing Quart, a Python web microframework based on asyncio. Quart is a bit like Flask, but it's been built to tackle one of Flask's few weaknesses: the fact that it is incompatible with asyncio.
- Qt comes to the browser. The popular UI tool is now interoperable with WebAssembly. That should help Qt become more popular with JavaScript developers. It hasn't yet captured the web development market, but perhaps this is a step in that direction.
- GitLab 10.7 has been released.
- Vcpkg: Microsoft's C++ library manager is now available on Linux, macOS and Windows.
- Warehouse is the next-generation package index for Python.
- Scala 2.1.4 and the Scala 3 roadmap are out.
- Kotlin 1.2.40 is out!

Google’s global coding competitions, Code Jam, HashCode and Kick Start come together on a single website

Amrata Joshi
26 Nov 2018
3 min read
Last week, Google brought its popular coding competitions Code Jam, HashCode and Kick Start together on a single website. The brand new UI improves navigation and makes the site more user-friendly, and the user profile now shows notifications to improve the user experience.

Code Jam
Google's global coding competition, Code Jam, gives programmers around the world an opportunity to solve tricky algorithmic puzzles. The first round consists of three sub-rounds, and the top 1,500 participants from each sub-round get a chance to compete for a spot in round 2. From round 2, the top 1,000 contestants move on to the third round, and the top 25 contestants from the third round compete in the finals. The winners get the championship title and $15,000.

HashCode
HashCode is a team-based programming challenge organized by Google for students and professionals around the world. After registering for the contest, participants get access to the Judge System, an online platform where one can form a team, join a hub, practice, and compete during the rounds. Participants choose their team and programming language, and the HashCode team assigns an engineering problem via a live stream on YouTube. Teams can compete either from a local hub or from any other location of their choice. The selected teams compete in the final round at Google's office.

Kick Start
Kick Start, also a global online coding competition, consists of a variety of algorithmic challenges designed by Google engineers. Participants can take part in one of the online rounds or in all of them, and the top participants get a chance to be interviewed at Google. The best part about Kick Start is that it is open to all participants and there is no pre-qualification needed. If you are competing in a coding competition for the first time, Kick Start is the best option.

What can you expect from this unified interface?
- Some good competition and some amazing insights from each of the rounds.
- A personalized certificate of completion.
- A chance to practice coding and experience new challenges.
- A lot of opportunities.

To stay updated with the registration dates and details, you can sign up on Google's coding competitions official page. To know more about the competitions, check out Google's blog.

Google hints shutting down Google News over EU's implementation of Article 11 or the "link tax"
Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Google Dart 2.1 released with improved performance and usability

RedHat takes over stewardship for the OpenJDK 8 and OpenJDK 11 projects from Oracle

Sugandha Lahoti
18 Apr 2019
2 min read
Yesterday, Red Hat announced that it will serve as the steward of OpenJDK 8 and OpenJDK 11, following the transition from Oracle. "With this transition," says Red Hat, "we are affirming our support of the Java community and following a similar path that led to its leadership of both the OpenJDK 6 and OpenJDK 7 projects."

At the end of January 2019, Oracle officially ended free public updates to Oracle JDK for non-Oracle-customer commercial users. These users will no longer be able to get updates without an Oracle support contract. Additionally, Oracle has changed the Oracle JDK license so that commercial use of JDK 11 and beyond will require an Oracle subscription.

The leadership transfer actually happened in mid-February, when Red Hat's Java technical lead, Andrew Haley, was assigned as the Lead Maintainer of the JDK 8u Updates and JDK 11u Updates projects in OpenJDK. While Andrew is the lead whose job is to accept backports, the people who actually do the backporting work span multiple companies: Red Hat, SAP, Oracle, Amazon, Google, and others. These vendors share the backporting, reviewing, and testing work. The OpenJDK 8u and 11u Update Projects are where that collaboration happens, and Red Hat is leading it.

Additionally, in December 2018, Red Hat announced commercial support for OpenJDK on Microsoft Windows. Red Hat plans to launch OpenJDK in a Microsoft installer in the coming weeks and to distribute IcedTea-Web, the free software implementation of Java Web Start, as part of the Windows OpenJDK distribution.

Mike Piech, vice president and general manager, Middleware, Red Hat, notes, "There is a developer hunger to bring Java into the next generation of development, and Red Hat is a leader in this movement through our involvement in the OpenJDK project. We are helping to lead the way in our efforts to enable users of JDK to have support and innovation in their existing environments. Red Hat remains committed to Java and is excited to have the opportunity to help steward the OpenJDK community."

The OpenJDK Transition: Things to know and do
Mark Reinhold on the evolution of the Java platform and OpenJDK
RedHat's OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested 'Operators' for applications

Microsoft R Open 4.0.2 now available from Revolutions

Matthew Emerick
23 Sep 2020
1 min read
Microsoft R Open 4.0.2 has been released, combining the latest R language engine with multi-processor performance and tools for managing R packages reproducibly. You can download Microsoft R Open 4.0.2 for Windows and Linux from MRAN now. Microsoft R Open is 100% compatible with all R version 4 scripts and packages, and works with all your favorite R interfaces and development environments.

This update brings R version 4 to Microsoft R Open for the first time. It includes many new features for the R language and system, along with improved performance and memory usage. Note that you will need to install any R packages you wish to use with MRO 4.0.2. By default, MRO installs packages from a static CRAN snapshot taken on July 16, 2020.

We hope you find Microsoft R Open useful. If you have any comments or questions, please visit the Microsoft R Open forum. You can follow the development of Microsoft R Open at the MRO GitHub repository. To download Microsoft R Open, simply follow the link below.

MRAN: Download Microsoft R Open

How we debug Feathers APIs using Postman from DailyJS - Medium

Matthew Emerick
22 Sep 2020
5 min read
Photo by Alexander Sinn on Unsplash

At Aquil.io, developing APIs and collaborating with clients go hand in hand. Communication is essential in any consulting environment, and explaining information about a Feathers service is most effectively done with executable code, close to the metal. To control a test and increase the reliability of the result, it is important to remove as many moving parts as possible. We leverage this philosophy in all parts of our workflow.

Postman is a tool that allows us to create predictable and consistent requests against APIs, and it is a tool we regularly use when debugging them. To aid in collaboration with client development teams, we create a repository of requests in Postman, which we can then export and share in various formats, such as curl.

For this article's purposes, we assume a Feathers API is being developed. When looking to test or debug with Postman, be sure to configure the Feathers REST transport: Postman does not currently work with WebSockets. However, RESTful and WebSocket requests execute the same hooks and service methods thanks to the Feathers transport abstractions.

Create the workspace
Though requests can be created in the default workspace, in practice this can quickly become an unmaintainable list. We create a workspace to group requests by project. Personal workspaces can always be converted to team workspaces later. There is an in-depth article on workspaces on the official Postman blog.

Leveraging variables and environments
The APIs we develop fall into a few environments, depending on the project; commonly these are local, staging, and production. We run requests against different environments using the environment switcher. Within our requests, we use variables defined at the environment level and updated based on the environment we select. In most scenarios we use two variables, host and token. The request, in most cases, will require authentication. There are several options for authenticating a request with the service; the chosen authorization will modify the request with the appropriate headers or preflight calls. We use the Bearer Token option and apply the token for the request. We create a new environment in Postman named after the environment itself, add a host variable, and add an authentication token variable named token. We create representations of each environment and define the appropriate variables.

Create collections and requests
To us, collections are synonymous with Feathers resources. Treating a collection as a representation of a resource allows for an intuitive grouping of requests. We create a collection named after the Feathers resource, select Bearer Token on the Authorization tab, and apply the token variable to the Token field. We then create a request within the appropriate collection, using the environment variables to build the final request. Each request uses the host variable from the chosen environment and, for authentication, Inherit auth from parent, which carries over the authorization settings from the parent collection. A request set up this way can be run against any environment simply by switching the environment and sending the request.

Naming the request is valuable for communicating intent. We consider the verb in our naming convention so it reads like plain English. Feathers closely ties service objects to HTTP verbs, and this connection makes the title valuable in identifying and finding a request within a list of requests. Feathers conventions (the find, get, create, update, patch, and remove service methods) become their HTTP counterparts: GET, POST, PUT, PATCH, and DELETE requests.

Sharing requests
When our team needs to collaborate on an endpoint, such as for debugging, Postman can easily export a curl request. A curl request is a simple and easily shared representation of the request, and we prefer it over alternatives such as sharing UI walkthroughs. Browser configuration, like caching or plugins, might interfere with the request, and a UI application accepts a request's response and derives an interface from the data; that interface is not a one-to-one relation to the data. Collaborating around a curl request removes variability.

Conclusion
It is important to note that using Postman does not replace the need for tests. We borrow much of our testing philosophy from the testing pyramid and use Postman in tandem to accelerate our development process. Requests in Postman execute hooks, middleware, service objects, and data stores, so Postman effectively acts as an ad hoc integration test.

At Aquil.io, we implement a deterministic strategy for processes and look for efficient and straightforward solutions. When we integrate a tool into our workflow, we expect it to complement our strategy and improve productivity. Additionally, communication is a vital part of our strategy. Postman enables short feedback loops and communicates dense information succinctly, making it an essential tool in our development process.

Originally published at https://aquil.io on September 16, 2020. If you want to check out what I'm working on or have web development needs, visit Aquil.io.

How we debug Feathers APIs using Postman was originally published in DailyJS on Medium, where people are continuing the conversation by highlighting and responding to this story.
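The article builds these requests in Postman; as a rough scripted companion, the sketch below replicates the same kind of call with Python's requests library instead. The host, token, and the /messages service path are hypothetical placeholders, not taken from the article.

```python
# Minimal sketch (assumptions throughout): replicate the Postman request in a script.
# FEATHERS_HOST / FEATHERS_TOKEN and the /messages service are hypothetical placeholders.
import os
import requests

HOST = os.environ.get("FEATHERS_HOST", "http://localhost:3030")   # plays the role of Postman's {{host}}
TOKEN = os.environ.get("FEATHERS_TOKEN", "replace-me")            # plays the role of Postman's {{token}}

# Same Bearer Token authorization the article applies at the collection level
headers = {"Authorization": f"Bearer {TOKEN}"}

# Over the Feathers REST transport, a service's find method maps to GET /<service-path>
response = requests.get(f"{HOST}/messages", headers=headers)
response.raise_for_status()
print(response.json())
```

Like the exported curl request, a script of this kind removes browser configuration from the equation, though it does not replace Postman's environment switching or collection-level auth inheritance.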

Python 3.9.0rc2 is now available for testing from Python Insider

Matthew Emerick
17 Sep 2020
2 min read
Python 3.9.0 is almost ready. This release, 3.9.0rc2, is the last planned preview before the final release of Python 3.9.0 on 2020-10-05. Get it here: https://www.python.org/downloads/release/python-390rc2/

In the meantime, we strongly encourage maintainers of third-party Python projects to prepare their projects for 3.9 compatibility during this phase. As always, report any issues to the Python bug tracker. Please keep in mind that this is a preview release and its use is not recommended for production environments.

Information for core developers
The 3.9 branch is now accepting changes for 3.9.1. To maximize stability, the final release will be cut from the v3.9.0rc2 tag. If you need the release manager to cherry-pick any critical fixes, mark issues as release blockers and/or add him as a reviewer on a critical backport PR on GitHub. To see which changes are currently cherry-picked for inclusion in 3.9.0, look at the short-lived branch-v3.9.0 on GitHub.

Installer news
This is the first version of Python to default to the 64-bit installer on Windows. The installer now also actively disallows installation on Windows 7; Python 3.9 is incompatible with this unsupported version of Windows.

Major new features of the 3.9 series, compared to 3.8
Some of the major new features and changes in Python 3.9 are:

- PEP 584, Union Operators in dict
- PEP 585, Type Hinting Generics In Standard Collections
- PEP 593, Flexible function and variable annotations
- PEP 602, Python adopts a stable annual release cadence
- PEP 615, Support for the IANA Time Zone Database in the Standard Library
- PEP 616, String methods to remove prefixes and suffixes
- PEP 617, New PEG parser for CPython
- BPO 38379, garbage collection does not block on resurrected objects
- BPO 38692, os.pidfd_open added, which allows process management without races and signals
- BPO 39926, Unicode support updated to version 13.0.0
- BPO 1635741, when Python is initialized multiple times in the same process, it no longer leaks memory
- A number of Python builtins (range, tuple, set, frozenset, list, dict) are now sped up using PEP 590 vectorcall
- A number of Python modules (_abc, audioop, _bz2, _codecs, _contextvars, _crypt, _functools, _json, _locale, operator, resource, time, _weakref) now use multiphase initialization as defined by PEP 489
- A number of standard library modules (audioop, ast, grp, _hashlib, pwd, _posixsubprocess, random, select, struct, termios, zlib) are now using the stable ABI defined by PEP 384

More resources
- Online Documentation
- PEP 596, 3.9 Release Schedule
- Report bugs at https://bugs.python.org
- Help fund Python and its community

Your friendly release team,
Ned Deily @nad
Steve Dower @steve.dower
Łukasz Langa @ambv
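Two of the PEPs in the list above are easy to try directly from a 3.9 interpreter. The snippet below is a minimal sketch of PEP 584's dict union operators and PEP 616's prefix/suffix string methods; the sample values are purely illustrative and not taken from the release notes.

```python
# Requires Python 3.9 or newer (the release candidate described above works).

# PEP 584: union operators for dict
defaults = {"host": "localhost", "port": 8000}
overrides = {"port": 9090}
merged = defaults | overrides        # {'host': 'localhost', 'port': 9090}
defaults |= overrides                # in-place merge; right-hand side wins on conflicts

# PEP 616: string methods to remove prefixes and suffixes
archive = "python-3.9.0rc2.tar.gz"
print(archive.removeprefix("python-"))   # 3.9.0rc2.tar.gz
print(archive.removesuffix(".tar.gz"))   # python-3.9.0rc2
```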

R 4.0.0 now available, and a look back at R's history from Revolutions

Matthew Emerick
27 Apr 2020
4 min read
R 4.0.0 was released in source form on Friday, and binaries for Windows, Mac and Linux are available for download now. As the version number bump suggests, this is a major update to R that makes some significant changes. Some of these changes — particularly the first one listed below — are likely to affect the results of R's calculations, so I would not recommend running scripts written for prior versions of R without validating them first. In any case, you'll need to reinstall any packages you were using for R 4.0.0. (You might find this R script useful for checking what packages you have installed for R 3.x.) You can find the full list of changes and fixes in the NEWS file (it's long!), but here are the biggest changes:

Imported string data is no longer converted to factors. The stringsAsFactors option, which since R's inception defaulted to TRUE and converted imported string data to factor objects, is now FALSE. This default was probably the biggest stumbling block for new users of R: it made statistical modeling a little easier and used a little less memory, but at the expense of confusing behavior on data you probably thought was ordinary strings. This change broke backward compatibility for many packages (mostly now updated on CRAN), and likely affects your own scripts unless you were diligent about including explicit stringsAsFactors declarations in your import function calls.

A new syntax for specifying raw character strings. You can use syntax like r"(any characters except right paren)" to define a literal string. This is particularly useful for HTML code, regular expressions, and other strings that include quotes or backslashes that would otherwise have to be escaped.

An enhanced reference counting system. When you delete an object in R, it usually releases the associated memory back to the operating system. Likewise, if you copy an object with y <- x, R won't allocate new memory for y unless x is later modified. In prior versions of R, however, that system breaks down if there are more than two references to any block of memory. Starting with R 4.0.0, all references will be counted, so R should reclaim as much memory as possible, reducing R's overall memory footprint. This will have no impact on how you write R code, but the change makes R run faster, especially on systems with limited memory and slow storage.

Normalization of matrix and array types. Conceptually, a matrix is just a 2-dimensional array, but prior versions of R handle matrix and 2-D array objects differently in some cases. In R 4.0.0, matrix objects formally inherit from the array class, eliminating such inconsistencies.

A refreshed color palette for charts. The base graphics palette for prior versions of R (labeled R3 in the original post) features saturated colors that vary considerably in brightness; for example, yellow doesn't display as prominently as red. In R 4.0.0, a new palette (R4) will be used, with colors of consistent luminance that are easier to distinguish, especially for viewers with color deficiencies. Additional palettes will make it easy to produce base graphics charts that match the color scheme of ggplot2 and other graphics systems.

Performance improvements. The grid graphics system has been revamped (which improves the rendering speed of ggplot2 graphics in particular), socket connections are faster, and various functions have been sped up. Cairo graphics devices have been updated to support more fonts and symbols, an improvement particularly relevant to Linux-based users of R.

R version 4 represents a major milestone in the history of R. It's been just over 20 years since R 1.0.0 was released on February 29, 2000, and the history of R extends even further back than that. If you're interested in the other major milestones, I cover R's history in this recent talk for the SatRDays DC conference. For the details on the R 4.0.0 release, including the complete list of changes, check out the announcement at the link below.

R-announce archives: R 4.0.0 is released

NetNewsWire 5.0 releases with Dark mode, smart feed article list, three-pane design and much more!

Amrata Joshi
02 Sep 2019
3 min read
Last week, the team behind NetNewsWire released NetNewsWire 5.0, a free and open-source RSS reader for Mac. NetNewsWire lets users read articles from their favorite blogs and news sites and keeps track of what they have already read. Users need not switch from page to page to find new articles; instead, NetNewsWire provides them with a list of new articles.

NetNewsWire started in 2002 as Brent Simmons' project and was sold in 2005 and again in 2011. Simmons finally re-acquired NetNewsWire from Black Pixel last year and relaunched it as version 5 this year. The rebooted project was initially named "Evergreen" and became NetNewsWire again in 2018.

This release of NetNewsWire 5.0 includes JSON Feed support, syncing via Feedbin, Dark Mode, a "Today" smart feed, starred articles, and more.

Key features included in NetNewsWire 5.0

Three-pane design
NetNewsWire 5.0 features a common three-pane design: the user's feeds and folders sit in the left-hand column, the article list for each feed sits in the middle column, and the selected article is displayed in the right column.

(Image source: The Sweet Setup)

Dark mode
NetNewsWire 5.0 comes with light and dark modes, so it fits well with macOS's dark mode support.

New buttons
The buttons follow a design similar to the rest of the Mac. This version features buttons for creating a new folder, sending an article to Safari, or marking an article as unread.

Smart feed article list
The smart feed article list shows the article title, the feed's icon, a short description from the article, the time the article was published, and the publisher's name. The "Today" smart feed shows articles published in the last 24 hours, rather than articles published since midnight on the current date.

Unread articles
Unread articles in a feed are marked with a bright blue dot, and users can double-click an article in the article list to open it directly in Safari.

Keyboard shortcuts
Users can mark all articles in a given feed as read by pressing CMD + K, jump between their smart feeds with CMD + 1/2/3, jump to the browser by hitting CMD + right arrow, and page through an article by hitting the spacebar.

What is expected in the future?

Support for more services
NetNewsWire supports only its own local RSS service and Feedbin, and the local RSS service doesn't currently sync to any other service. Support for more services is expected in the future.

Read-it-later support
Apps like Reeder and Fiery Feeds (on iOS) have been working on their own read-it-later features of late, and NetNewsWire 5 doesn't yet support such a feature.

iOS version
The team is currently working on the iOS version of NetNewsWire.

Users seem excited about this release overall. A user commented on Hacker News, "This looks very good, I'm just waiting for Feedly compatibility."

To know more about this news, check out the official post.

What's new in application development this week?
Twilio launched Verified By Twilio, that will show customers who is calling them and why
Emacs 26.3 comes with GPG key for GNU ELPA package signature check and more!
Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks

An Introduction to testing in Javascript from DailyJS - Medium

Matthew Emerick
17 Sep 2020
1 min read
Today, we are going to discuss testing in JavaScript and help you start your journey towards understanding and mastering it. Continue reading on DailyJS »

How Oracle v. Google could upend software development from InfoWorld Java

Matthew Emerick
07 Oct 2020
1 min read
Oracle v. Google has been winding its way through the courts for a decade. You've probably already heard that the high-profile legal case could transform software engineering as we know it — but since nothing ever seems to happen, it's forgivable if you've made a habit of tuning out the news. It might be time to tune back in.

The latest iteration of the case will be heard by the U.S. Supreme Court in the 2020-2021 term, which began this week (after being pushed back due to coronavirus concerns). The decision of the highest court in the land can't be overturned and is unlikely to be reversed, so unlike previous decisions at the district and circuit court level, it will stick for good. And while the case is being heard in the U.S., the decision will impact the entire global tech industry.

Stack Exchange migrates to .NET Entity Framework Core (EF Core), Stack Overflow to follow soon

Savia Lobo
08 Oct 2018
2 min read
Last week, Nick Craver, Architecture Lead for Stack Overflow, announced that Stack Exchange is migrating to .NET Entity Framework Core (EF Core) and asked users to help test the EF Core deployment. The Stack Exchange network has deployed a major migration from its previous Linq-2-SQL data layer to EF Core, and Stack Overflow may also get a partial tier of the deployment later today. In his post, Nick said, "Along the way we have to swap out parts that existed in the old .NET world but don't in the new."

Some changes in Stack Exchange and Stack Overflow after the migration to EF Core

The Stack Exchange team said that they have safely diverged their Enterprise Q3 release. This means they work on one codebase for easier maintenance, and the latest features will also be reflected in the .NET Entity Framework Core version.

Stack Overflow was written on top of a data layer called Linq-2-SQL. This worked well but had scaling issues, so the team replaced the performance-critical paths with a library named Dapper. However, until today, some old paths, mainly where entries are inserted, remained on Linq-2-SQL. As part of the migration, a few code paths went to Dapper instead of EF Core, which means Dapper wasn't removed and still exists after the migration.

This migration may affect posts, comments, users, and other 'primary' object types in Q&A. Nick also added, "We're not asking for a lot of test data to be created on meta here, but if you see something, please say something!" He further added, "The biggest fear with a change like this is any chance of bad data entering the database, so while we've tested this extensively and have done a few tests deploys already, we're still being extra cautious with such a central & critical change."

To know more about this in detail, head over to Nick Craver's discussion thread on Stack Exchange.

.NET Core 3.0 and .NET Framework 4.8 more details announced
.NET Core 2.0 reaches end of life, no longer supported by Microsoft
Stack Overflow celebrates its 10th birthday as the most trusted developer community