
Tech News - Programming

573 Articles

Python 3.9 alpha 1 is now ready for testing

Vincy Davis
22 Nov 2019
3 min read
Three days ago, the team behind Python announced the release of Python 3.9.0a1, the first of six planned alpha releases of Python 3.9. The final stable version of Python 3.9 is slated for release in May 2020. An alpha release means that developers can start testing new features and checking bug fixes, but it is not recommended for production use. Last month, the previous stable version, Python 3.8, was released with features like the walrus operator, positional-only parameters, and support for Vectorcall.

Read More: Core Python team confirms sunsetting Python 2 on January 1, 2020

Let's look at some of the new features you can expect in the upcoming Python 3.9 release.

Some improvements introduced in Python 3.9.0a1

Language Changes

The __import__() function, which is invoked by the import statement, will now raise ImportError instead of ValueError. In previous versions, the latter used to occur when a relative import went past its top-level package.

Starting with Python 3.9.0a1, when a script is specified on the command line by its filename, the __file__ attribute of the __main__ module, sys.argv[0], and sys.path[0] will be absolute paths rather than relative ones. The traceback will also display the absolute path for __main__ module frames in this case.

In development mode and in debug builds, the encoding and errors arguments are now checked in string encoding and decoding operations.

Improved Modules

ast: The indent option has been added to dump(), which produces multi-line indented output (see the sketch at the end of this piece).

asyncio: It can now use coroutines, a generalized form of subroutines. Subroutines enter and exit at only two points, while coroutines can be entered, exited, and resumed at many points. Moreover, asyncio.run() has been updated to use the new coroutine.

New functions like curses.get_escdelay(), curses.set_escdelay(), curses.get_tabsize(), and curses.set_tabsize(), as well as the constants F_OFD_GETLK, F_OFD_SETLK, and F_OFD_SETLKW, are included in Python 3.9.0a1.

A few Python users have already started testing the Python 3.9.0a1 release.

https://twitter.com/codewithanthony/status/1197559895744110592

The next alpha release of Python 3.9 is scheduled for 16th December 2019. To know more about Python 3.9.0a1, check out the official documentation.
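The ast change is easy to try. Here is a minimal sketch (ours, not from the announcement; the source string is an arbitrary example) that should run on Python 3.9.0a1 or later:

    import ast

    # Parse a tiny module; the source string is just an arbitrary example.
    tree = ast.parse("answer = 40 + 2")

    # New in Python 3.9: dump() accepts an indent option and produces
    # multi-line indented output instead of a single long line.
    print(ast.dump(tree, indent=4))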
Introducing Spleeter, a Tensorflow based python library that extracts voice and sound from any music track
Severity issues raised for Python 2 Debian packages for not supporting Python 3
Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python
Poetry, a Python dependency management and packaging tool, releases v1 beta 1 with URL dependency
PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3


Qml.Net: A new C# library for cross-platform .NET GUI development

Prasad Ramesh
10 Aug 2018
3 min read
Qml.Net is a C# library for cross-platform GUI development with a native dependency. It exposes the object types required to host a QML engine. In Qml.Net, QML and JavaScript together form the UI layer; it can be thought of as the view in MVC.

Qml.Net features

The PInvoke code in this .NET library is hand-crafted by developer Paul Knopf to ensure appropriate memory management and pointer ownership semantics. He is pretty confident about the library and mentions in his blog, "I'd bet you couldn't generate a segfault, even if you wanted to."

In Qml.Net, C# objects can be registered to be treated as QML components. You can then interoperate with them as you would with regular JavaScript objects. The registered C# objects serve as a portal through which the QML world can interact with your .NET objects. This has the added benefit of keeping your business and UI concerns cleanly separated. There will also be no chatty PInvoke calls for rendering. It is a great match.

A pre-compiled portable installation of Qt and the native C wrapper is available for Windows, OSX, and Linux, so developers don't have to bother with C/C++. All you need to know is QML, C#, and JavaScript; QML is fairly simple. QML can't really be classified as a language in the semantic sense; more appropriately, it can be considered a combination of JSON and JavaScript.

Qml.Net support and working

Qml.Net will work with any .NET language, including the popular C# as well as functional languages like F#. Your libraries reference the pure .NET NuGet package, Qml.Net. The host process (Program.Main) references the native NuGet package, which depends on the OS you are on:

Qml.Net.WindowsBinaries
Qml.Net.OSXBinaries
Qml.Net.LinuxBinaries

Paul currently only tests his own models, which are C# objects registered with the QML engine and specific to each control/page.

Since Microsoft's announcement of .NET Core, there hasn't been a clear story for cross-platform GUI development. Although Microsoft plans to support WPF in .NET Core 3.0, it will be limited to Windows machines. With community involvement and support, Qml.Net can be a potential game changer. You can head to the GitHub repository and also view some hosted examples to get a better idea.

Read next

Exciting New Features in C# 8.0
.NET Core completes move to the new compiler – RyuJIT
Microsoft Azure's new governance DApp: An enterprise blockchain without mining


Unity 2019.2 releases with updated ProBuilder, Shader Graph, 2D Animation, Burst Compiler and more

Fatema Patrawala
31 Jul 2019
3 min read
Yesterday, the Unity team announced the release of Unity 2019.2. In this release, they have added more than 170 new features and enhancements for artists, designers, and programmers, with updates to ProBuilder, Shader Graph, 2D Animation, Burst Compiler, UI Elements, and many more.

Major highlights of Unity 2019.2

ProBuilder 4.0 ships as verified with 2019.2. It is a unique hybrid of 3D modeling and level design tools, optimized for building simple geometry but capable of detailed editing and UV unwrapping as needed.

Polybrush is now available via Package Manager as a Preview package. This versatile tool lets you sculpt complex shapes from any 3D model, position detail meshes, paint in custom lighting or coloring, and blend textures across meshes directly in the Editor.

DSPGraph is the new audio rendering/mixing system, built on top of Unity's C# Job System. It's now available as a Preview package.

The team has improved UI Elements, Unity's new UI framework, which renders UI for graph-based tools such as Shader Graph, Visual Effect Graph, and Visual Scripting.

To help you better organize your complex graphs, Unity has added subgraphs to Visual Effect Graph. You can share, combine, and reuse subgraphs for blocks and operators, and also embed complete VFX within VFX. The integration between Visual Effect Graph and the High-Definition Render Pipeline (HDRP) has also been improved: HDRP now pulls VFX Graph in by default, providing additional rendering features.

With Shader Graph, you can now use Color Modes to highlight nodes on your graph with colors based on various features, or select your own colors to improve readability. This is especially useful in large graphs.

The team has added swappable Sprites functionality to the 2D Animation tool. With this new feature, you can change a GameObject's rendered Sprites while reusing the same skeleton rig and animation clips. This lets you quickly create multiple characters using different Sprite Libraries or customize parts of them with Sprite Resolvers.

With this release, Burst Compiler 1.1 includes several improvements to JIT compilation time as well as some C# improvements. Additionally, the Visual Studio Code and JetBrains Rider integrations are available as packages.

Mobile developers will benefit from improved OpenGL support: the team has added OpenGL multithreading support (iOS) to improve performance on low-end iOS devices that don't support Metal.

As with all releases, 2019.2 includes a large number of improvements and bug fixes. You can find the full list of features, improvements, and fixes in the Unity 2019.2 Release Notes.

How to use arrays, lists, and dictionaries in Unity for 3D game development
OpenWrt 18.06.4 released with updated Linux kernel, security fixes Curl and the Linux kernel and much more!
How to manage complex applications using Kubernetes-based Helm tool [Tutorial]


PHP 8 and 7.4 to come with Just-in-time (JIT) to make most CPU-intensive workloads run significantly faster

Bhagyashree R
01 Apr 2019
3 min read
Last week, Joe Watkins, a PHP developer, shared that PHP 8 will support Just-in-Time (JIT) compilation. This decision was the result of a vote among the PHP core developers in favor of supporting JIT in PHP 8, and also in PHP 7.4 as an experimental feature.

If you don't know what JIT is: it is a compilation strategy in which a program is compiled on the fly into a form that is usually faster, typically the host CPU's native instruction set. To do this, the JIT compiler has access to dynamic runtime information, whereas a standard compiler doesn't.

How are PHP programs compiled?

PHP comes with a virtual machine named the Zend VM. Human-readable scripts are compiled into instructions, called opcodes, that the virtual machine understands. Opcodes are low-level, and hence faster to translate to machine code than the original PHP code. This stage of execution is called compile time. The opcodes are then executed by the Zend VM in the runtime stage.

JIT is being implemented as an almost independent part of OPcache, an extension that caches the opcodes so that compilation happens only when it is required. In PHP, JIT will treat the instructions generated for the Zend VM as an intermediate representation. It will then generate architecture-dependent machine code, so that the host of your code is no longer the Zend VM but the CPU directly.

Why is JIT being introduced in PHP?

PHP hits the brick wall: Many improvements have been made to PHP since version 7.0, including optimizations for HashTable, specializations in the Zend VM for certain opcodes, specializations in the compiler for certain sequences, and many more. After so many improvements, PHP has now reached the limit of how much further it can be improved.

PHP for non-web scenarios: Adding JIT support will allow PHP to be used in scenarios for which it is not even considered today, i.e., non-web, CPU-intensive scenarios, where the performance benefits will be very substantial.

Faster innovation and more secure implementations: With JIT support, the team will be able to develop built-in functions in PHP instead of C without a huge performance penalty. This will make PHP less susceptible to memory management problems, overflows, and other similar issues associated with C-based development.

We can expect the release of PHP 7.4 later this year, which will debut JIT in PHP. Though there is no official announcement about the release schedule of PHP 8, many are speculating a release in late 2021. Read Joe Watkins' announcement on his blog.
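The compile-then-execute pipeline described above is not unique to PHP. As a rough analogy of ours (not from the article), CPython exposes its own VM instruction stream through the standard dis module, which makes the compile-time vs. runtime split easy to see:

    import dis

    def greet(name):
        return "Hello, " + name

    # CPython, like PHP's Zend VM, first compiles source into low-level
    # VM instructions ("bytecode" in Python, "opcodes" in PHP); dis prints
    # that intermediate representation. A JIT such as the one planned for
    # PHP 8 translates instructions like these into native machine code
    # at runtime.
    dis.dis(greet)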
PEAR's (PHP Extension and Application Repository) web server disabled due to a security breach
Symfony leaves PHP-FIG, the framework interoperability group
Google App Engine standard environment (beta) now includes PHP 7.2


Facebook mandates Visual Studio Code as default development environment and partners with Microsoft for remote development extensions

Fatema Patrawala
21 Nov 2019
4 min read
On Tuesday, Facebook mandated Visual Studio Code, the source code editor developed by Microsoft, as its default development environment. Additionally, the company stated that it will work with Microsoft to expand the remote development extensions for Visual Studio Code so that engineers can do large-scale remote development.

As per the official announcement, Facebook engineers have written millions of lines of code, and until now there was no mandated development environment. Developers used Vim or Emacs, and the development environment was disjointed. Certain developers also used Nuclide, an integrated development environment developed by Facebook. In late 2018, the company announced to its internal engineers that it would move Nuclide to Visual Studio Code. It has also done plenty of development work to migrate the current Nuclide functionality, along with new features, to Visual Studio Code, which is currently used extensively across the company in beta.

Why Visual Studio Code?

Visual Studio Code is a very popular development tool, with great support from Microsoft and the open source community. It runs on macOS, Windows, and Linux, and has a robust and well-defined extension API that enables Facebook to continue building the important capabilities required for its large-scale development. The company believes it is a platform on which it can safely bet its development platform future.

Facebook has also partnered with Microsoft on remote development. At present, Facebook engineers install Visual Studio Code on a local PC, but the actual development is done directly on a development server in the data center. The aim is to improve efficiency and productivity by making the code on the server accessible in a seamless and high-performance manner. The company believes that using remote extensions will provide many benefits:

Work with larger, faster, or more specialized hardware than what's available on the local machine
Create tailored, dedicated environments for each project's specific dependencies, without worrying about errors due to mixed or conflicting configurations
Quickly switch between multiple running development environments without impacting local resources or tool performance

Facebook mandated Visual Studio Code as the integrated development environment used internally in part because Facebook uses various programming languages. Because the company also uses Mercurial as its source control infrastructure, it will work on the development of extensions to allow direct source control operations within Visual Studio Code.

Facebook states, "VS Code is now an established part of Facebook's development future. In teaming with Microsoft, we're looking forward to being part of the community that helps Visual Studio Code continue to be a world class development tool."

On Hacker News, developers are discussing various issues related to the remote development extensions in VS Code, one being that they are not open source and that Facebook should push to make them an open project. One comment reads:

"Just an FYI for people - The Remote Development extensions are not open source. I'd hope if Facebook were joining efforts, they'd do so on a more open project.
1: https://code.visualstudio.com/docs/remote/faq#_why-arent-the...
2: https://github.com/microsoft/vscode/wiki/Differences-between...
3: https://github.com/VSCodium/vscodium/issues/240 (aka, on-the-wire DRM to make sure the remote components only talk to a licensed VS Code build from Microsoft)
MS edited the licensing terms many moons ago, to prepare for VS Code in browser using these remote extensions/apis that no one else can use) - https://github.com/microsoft/vscode/issues/48279
Finally, this is the thread where you will see regular users being negatively impacted by the DRM (a closed source, non-statically linked proprietary binary downloaded at runtime) that implements this proprietary-ness: https://github.com/microsoft/vscode-remote-release/issues/10... (of course, also with enough details to potentially patch around this issue if you were so inclined). Further, MS acknowledged that statically linking would help in May, and yet it appears to still be an issue. I just hope they don't come after Eclipse Theia…"

Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 developers explain why they use Visual Studio Code [Sponsored by Microsoft]
5 useful Visual Studio Code extensions for Angular developers
Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more


Angular Thoughts on Docs from Angular Blog - Medium

Matthew Emerick
18 Sep 2020
5 min read
If you have visited the docs at angular.io lately, you might have noticed some changes in our content layout and structure. As the lead technical writer for Angular, I thought I'd take a moment to cover some of the main goals we have for making the Angular documentation experience the best experience possible.

Focus on developers new to Angular

A common pitfall for many documentation sets is that they address too many audiences at once. This practice results in content that is verbose, difficult to navigate, and frustrating to read. For Angular, it's important that we focus on a single audience at a time, because we want to make sure to tell the right stories clearly and concisely. Right now, that means our documentation efforts focus on developers new to Angular.

Since I joined the Angular team, I've heard a recurring theme: "Angular has a steep learning curve." "The Angular documentation is overwhelming." As someone new to Angular, I find myself agreeing with these sentiments. That's why, for the next several months, we're focusing on making the getting started experience the best experience possible. Some of the changes we're making include:

Revamping the table of contents (the lists of topics in the left navigation) to help users understand what main concepts of Angular they should understand, and what topics can wait until they want to expand their applications further.

Categorizing topics into three topic types: Concepts, Tasks, and Tutorials. It's important for any reader — whether they are new to Angular or not! — to know what kind of content they're reading and whether it's what they're looking for. No one likes looking for how to get something done only to find themselves 3 paragraphs into a tutorial.

Streamlining existing content. When I write, I imagine that I'm tutoring a friend who has a plane to catch in 5 minutes. This image helps me focus on content that is casual in tone, but also concise and to the point. Applying this idea to Angular documentation will help users find the information they need and get back to code.

Help users get things done

Developers need documentation for a lot of different reasons. Sometimes you need a basic walkthrough of the technology. Other times, you need a real-world tutorial that addresses a problem that you're facing. Like focusing on a specific audience, a good documentation set focuses on one of these reasons at a time. For Angular, we're focused on writing content that helps users get things done.

When you navigate to the Angular documentation, we want you to find the information you need to complete a task or understand a feature, and then we want you to be able to get back to writing great code. To accomplish this goal, we're going through all of the topics to make sure they clearly state what they cover and why that's important. You should know right away if you're reading the content you need. And if you're not? We're working on providing links to other topics that might be more helpful.

Of course, there's always a place for more conceptual content. And there's a lot of great value in developing a deep understanding of how something works. I'm sure we'll focus on improving our conceptual content at some point in the future. For the time being, however, we want to make sure that you can find the help you need quickly and easily.

Improve, but don't break

As we mentioned earlier, many within the Angular community find the current documentation overwhelming. At the same time, we've also heard that the documentation remains one of the best places to learn how to build with Angular. That's why one of the other key goals we're focusing on is to improve the documentation without breaking it. As we write new content or improve existing content, we try to make sure that the existing documentation remains intact. There are always going to be times where we might fall short of this goal. When that happens, please let us know by filing a GitHub issue so we can investigate.

Conclusion

We think that focusing on new Angular developers, writing content that helps you get things done, and making sure we improve the documentation without breaking it will result in a better documentation experience for everyone. Of course, this is just the beginning. For example, as we wrap up content for new developers, we'll start looking at other audiences, such as those of you working on enterprise-level applications.

I can't express how grateful I am to work on a product that has such a passionate, supportive community, and I look forward to working with all of you to make the Angular documentation the best experience possible. In the meantime, continue to check out the docs and don't hesitate to let us know what you think!

Angular Thoughts on Docs was originally published in Angular Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3

Fatema Patrawala
14 Aug 2019
4 min read
The switch from Python 2 to Python 3 has been rocky, and all signs point to Python 3 pulling firmly into the lead. Python 3 is broadly compatible with many libraries, and there is an encouraging rate of adoption by cloud providers for application support as Python 2 reaches its EOL in 2020. But there are still plenty of efforts to keep Python 2 alive in one form or another. The default implementation of Python is open source, so it can easily be forked and maintained separately. Currently, all major open source Python packages support both Python 3.x and Python 2.7.

Last year, the Python team told users that Python 2.7 maintenance would stop in 2020. Originally there was no official date, but in March 2018 the team announced the date to be January 1, 2020.

https://twitter.com/ThePSF/status/1160839590967685121

This means that the maintainers of Python 2 will stop supporting it, even for security patches. There are many institutions and codebases that have not yet ported their code from Python 2 to Python 3. Python volunteers have created resources to help publicize and educate, but there is still more work to be done, which is why the Python Software Foundation has contracted with Changeset Consulting to help communicate about the sunsetting of Python 2. The high-level goal of Changeset's involvement is to help users through the end of the transition, help with communication so volunteers are not overwhelmed, and help update public-facing assets so core developers are not overwhelmed. This will also require all the major Python projects to migrate to Python 3 and above.

However, PyPy confirmed last week that it does not plan to deprecate Python 2.7 support as long as PyPy exists, according to an official Twitter statement.

https://twitter.com/pypyproject/status/1160209907079176192

Apart from this, the PyPy runtime is popular among developers due to its built-in JIT, which provides major speed boosts to Python code. PyPy has long favored Python 2 over Python 3. This favoritism isn't solely because the first versions of PyPy were Python 2 implementations and Python 3 has only recently entered the picture. It's also due to a key part of PyPy's ecosystem: RPython, a dynamic language implementation framework, has its foundation in Python 2. This is not likely to change, according to PyPy's official FAQ, which states that "the Python 2 version of PyPy will be around 'forever', i.e. as long as PyPy itself is around." According to PyPy's official announcement, it will support Python 3 while continuing to support Python 2.7.

Last year, when the Python team announced that Python 2 would officially end in 2020, users on Hacker News discussed the most popular packages being compatible with Python 3 while millions of people in the industry still work on Python 2.7. One of the comments reads:

"'most popular packages are now compatible with Python 3': I often see this but I think it's a perception from the Internet/web world. I work for CGI, all (I'm not kidding) our software (we have many) are 2.7. You will never see them used 'on the web/Internet/forum/network' place but the day-to-day job of millions of people in the industry is 2.7. And we are a tiny focused industry. So I'm sure there are many other industries like us which are 2.7 that you never heard of. That's why 'most popular' mean nothing once you take how Python is used as a whole. We don't use any of this web/Internet/network 'popular' packages. I'm not saying Python shouldn't move on. I'm just trying to argue against this 'most popular packages' while millions of us, even if you don't know it, use none of those."
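For codebases caught mid-transition like the ones described above, the usual stopgap is code that runs unchanged on both interpreters. A minimal illustration of ours (not from the article), using only the standard __future__ machinery:

    # Runs unchanged on Python 2.7 (including PyPy2) and on Python 3.x.
    from __future__ import print_function, division

    # With these imports, print is a function and / is true division on
    # both interpreters, so the line below behaves identically on 2 and 3.
    print("7 / 2 =", 7 / 2)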
GNU Radio 3.8.0.0 releases with new dependencies, Python 2 and 3 compatibility, and much more!
NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
Python 3.8 new features: the walrus operator, positional-only parameters, and much more


Announcing .NET 5.0 RC 2 from .NET Blog

Matthew Emerick
13 Oct 2020
12 min read
Today, we are shipping .NET 5.0 Release Candidate 2 (RC2). It is a near-final release of .NET 5.0, and the last of two RCs before the official release in November. RC2 is a "go live" release; you are supported using it in production. At this point, we're looking for reports of any remaining critical bugs that should be fixed before the final release. We also released new versions of ASP.NET Core and EF Core today.

You can download .NET 5.0 for Windows, macOS, and Linux: installers and binaries, container images, Snap installer, release notes, known issues, GitHub issue tracker. You need the latest preview version of Visual Studio (including Visual Studio for Mac) to use .NET 5.0.

.NET 5.0 includes many improvements, notably single file applications, smaller container images, more capable JsonSerializer APIs, a complete set of nullable reference type annotations, new target framework names, and support for Windows ARM64. Performance has been greatly improved, in the .NET libraries, in the GC, and in the JIT. ARM64 was a key focus for performance investment, resulting in much better throughput and smaller binaries. .NET 5.0 includes new language versions, C# 9 and F# 5.0. Check out some .NET 5.0 examples so you can try these features out for yourself.

Today is an auspicious day because we're kicking off the 2020 .NET@Microsoft internal conference. There will be many speakers from the .NET team, but also developers and architects from services teams that rely on .NET to power the Microsoft cloud, sharing their victories and also their challenges. I'm presenting (unsurprisingly) "What's new in .NET 5.0". My talk will be easy; I'll just read the .NET 5.0 blog posts, preview by preview! It will be a great talk. More seriously, the conference is our opportunity to make the case why Microsoft teams should adopt .NET 5.0 soon after it is available. At least one large team I know of is running on RC1 in production. The official .NET Microsoft site has been running on .NET 5.0 since Preview 1. It is now running RC2. The case we'll make to Microsoft teams this week is very similar to the case that I've intended to make to you across all of these .NET 5.0 blog posts. .NET 5.0 is a great release and will improve the fundamentals of your app.

Speaking of conferences, please save the date for .NET Conf 2020. This year, .NET 5.0 will launch at .NET Conf 2020! Come celebrate and learn about the new release. We're also celebrating our 10th anniversary and we're working on a few more surprises. You won't want to miss this one.

Just like I did for .NET 5.0 Preview 8 and .NET 5.0 RC1, I've chosen a selection of features to look at in more depth and to give you a sense of how you'll use them in real-world usage. This post is dedicated to C# 9 pattern matching, Windows ARM64, and ClickOnce.

C# 9 Pattern Matching

Pattern matching is a language feature that was first added in C# 7.0. It's best to let Mads reintroduce the concept. This is what he had to say when he originally introduced the feature:

C# 7.0 introduces the notion of patterns, which, abstractly speaking, are syntactic elements that can test that a value has a certain "shape", and extract information from the value when it does.

That's a really great description, perfectly worded. The C# team has added new patterns in each of the C# 7, C# 8, and C# 9 versions. In this post, you'll see patterns from each of those language versions, but we'll focus on the new patterns in C# 9. The three new patterns in C# 9 are:

Relational patterns, using relational operators such as < and >=.
Logical patterns, using the keywords and, or, and not. The poster child example is foo is not null. This type of pattern is most useful when you want to compare multiple things in one pattern.
Simple type patterns, using solely a type and no other syntax for matching.

I'm a big fan of the BBC Sherlock series. I've written a small app that determines if a given character should have access to a given piece of content within that series. Easy enough. The app is written with two constraints: stay true to the show timeline and characters, and be a great demonstration of patterns. If anything, I suspect I've failed most on the second constraint. You'll find a broader set of patterns and styles than one would expect in a given app (particularly such a small one).

When I'm using patterns, I sometimes want to do something subtly different than a pattern I'm familiar with achieves, and am not sure how to extend that pattern to satisfy my goal. Given this sample, I'm hoping you'll discover more approaches than perhaps you were aware of before, and can extend your repertoire of familiar patterns.

There are two switch expressions within the app. Let's start with the smaller of the two.

    public static bool IsAccessOKAskMycroft(Person person) => person switch
    {
        // Type pattern
        OpenCaseFile f when f.Name == "Jim Moriarty" => true,
        // Simple type pattern
        Mycroft => true,
        _ => false,
    };

The first two patterns are type patterns. The first pattern is supported with C# 8. The second one — Mycroft — is an example of the new simple type pattern. With C# 8, this pattern would require an identifier, much like the first pattern, or at the very least a discard such as Mycroft _. In C# 9, the identifier is no longer needed. Yes, Mycroft is a type in the app.

Let's keep to simple a little longer, before I show you the other switch expression. The following if statement demonstrates a logical pattern, preceded by two instances of a type pattern using is.

    if (user is Mycroft m && m.CaresAbout is not object)
    {
        Console.WriteLine("Mycroft disappoints us again.");
    }

The type isn't known, so the user variable is tested for the Mycroft type and is then assigned to m if that test passes. A property on the Mycroft object is tested to be not an object. A test for null would have also worked, but wouldn't have demonstrated a logical pattern.

The other switch expression is a lot more expansive.

    public static bool IsAccessOkOfficial(Person user, Content content, int season) => (user, content, season) switch
    {
        // Tuple + property patterns
        ({Type: Child}, {Type: ChildsPlay}, _) => true,
        ({Type: Child}, _, _) => false,
        (_, {Type: Public}, _) => true,
        ({Type: Monarch}, {Type: ForHerEyesOnly}, _) => true,
        // Tuple + type patterns
        (OpenCaseFile f, {Type: ChildsPlay}, 4) when f.Name == "Sherlock Holmes" => true,
        // Property and type patterns
        {Item1: OpenCaseFile {Type: var type}, Item2: {Name: var name}} when type == PoorlyDefined && name.Contains("Sherrinford") && season >= 3 => true,
        // Tuple and type patterns
        (OpenCaseFile, var c, 4) when c.Name.Contains("Sherrinford") => true,
        // Tuple, Type, Property and logical patterns
        (OpenCaseFile {RiskLevel: >50 and <100}, {Type: StateSecret}, 3) => true,
        _ => false,
    };

The only really interesting pattern is the very last one (before the discard: _), which tests for a RiskLevel that is >50 and <100. There are many times I've wanted to write an if statement with that form of logical pattern syntax without needing to repeat a variable name. This logical pattern could also have been written in the following way instead, and would have more closely matched the syntax demonstrated in the C# 9 blog post. They are equivalent.

    (OpenCaseFile {RiskLevel: var riskLevel}, {Type: StateSecret}, 3) when riskLevel switch
    {
        >50 and <100 => true,
        _ => false
    } => true,

I'm far from a language expert. Jared Parsons and Andy Gocke gave me a lot of help with this section of the post. Thanks! The key stumbling block I had was with a switch on a tuple. At times, the positional pattern is inconvenient, and you only want to address one part of the tuple. That's where the property pattern comes in, as you can see in the following code.

    {Item1: OpenCaseFile {Type: var type}, Item2: {Name: var name}} when type == PoorlyDefined && name.Contains("Sherrinford") && season >= 3 => true,

There is a fair bit going on there. The key point is that the tuple properties are being tested, as opposed to matching a tuple positionally. That approach provides a lot more flexibility. You are free to intermix these approaches within a given switch expression. Hopefully that helps someone. It helped me.

If you are curious about what the app does, I've saved the output of the program in the app gist. You can also run the app for yourself. I believe it requires .NET 5.0 RC2 to run. If there has been a pattern with the last three C# (major) versions, it has been patterns. I certainly hope the C# team matches that pattern going forward. I imagine it is the shape of things, and there are certainly more values to extract.

ClickOnce

ClickOnce has been a popular .NET deployment option for many years. It's now supported for .NET Core 3.1 and .NET 5.0 Windows apps. We knew that many people would want to use ClickOnce for application deployment when we added Windows Forms and WPF support to .NET Core 3.0. In the past year, the .NET and Visual Studio teams worked together to enable ClickOnce publishing, both at the command line and in Visual Studio. We had two goals from the start of the project:

Enable a familiar experience for ClickOnce in Visual Studio.
Enable a modern CI/CD for ClickOnce publishing with command-line flows, with either MSBuild or the Mage tool.

It's easiest to show you the experience in pictures. Let's start with the Visual Studio experience, which is centered around project publishing. You need to publish to a Folder target. The primary deployment model we're currently supporting is framework-dependent apps. It is easy to take a dependency on the .NET Desktop Runtime (that's the one that contains WPF and Windows Forms). Your ClickOnce installer will install the .NET runtime on user machines if it is needed. We also intend to support self-contained and single file apps.

You might wonder if you can still take advantage of ClickOnce offline and updating features. Yes, you can. The same install locations and manifest signing features are included. If you have strict signing requirements, you will be covered with this new experience.

Now, let's switch to the command line Mage experience. The big change with Mage is that it is now a .NET tool, distributed on NuGet. That means you don't need to install anything special on your machine. You just need the .NET 5.0 SDK, and then you can install Mage as a .NET tool. You can use it to publish .NET Framework apps as well; however, SHA1 signing and partial trust support have been removed. The Mage installation command follows:

    dotnet tool install -g Microsoft.DotNet.Mage

The following commands configure and publish a sample application. The next command launches the ClickOnce application. And then the familiar ClickOnce installation dialog appears. After installing the application, the app will be launched. After re-building and re-publishing the application, users will see an update dialog. And from there, the updated app will be launched.

Note: The name of the Mage .NET tool will change from mage.net to dotnet-mage for the final release. The NuGet package name will remain the same.

This quick lap around ClickOnce publishing and installation should give you a good idea of how you might use ClickOnce. Our intention has been to enable a parity experience with the existing ClickOnce support for .NET Framework. If you find that we haven't lived up to that goal, please tell us. ClickOnce browser integration is the same as with .NET Framework, supported in Edge and Internet Explorer. Please tell us how important it is to support the other browsers for your users.

Windows Arm64

MSI installers are now available for Windows Arm64, as you can see in the following image of the .NET 5.0 SDK installer. To further prove the point, I ran the dotnet-runtimeinfo tool on my Arm64 machine to demonstrate the configuration.

    C:\Users\rich>dotnet tool install -g dotnet-runtimeinfo
    You can invoke the tool using the following command: dotnet-runtimeinfo
    Tool 'dotnet-runtimeinfo' (version '1.0.2') was successfully installed.

    C:\Users\rich>dotnet-runtimeinfo
    **.NET information
    Version: 5.0.0
    FrameworkDescription: .NET 5.0.0-rc.2.20475.5
    Libraries version: 5.0.0-rc.2.20475.5
    Libraries hash: c5a3f49c88d3d907a56ec8d18f783426de5144e9

    **Environment information
    OSDescription: Microsoft Windows 10.0.18362
    OSVersion: Microsoft Windows NT 10.0.18362.0
    OSArchitecture: Arm64
    ProcessorCount: 8

The .NET 5.0 SDK does not currently contain the Windows Desktop components — Windows Forms and WPF — on Windows Arm64. This late change was initially shared in the .NET 5.0 Preview 8 post. We are hoping to add the Windows desktop pack for Windows Arm64 in a 5.0 servicing update. We don't currently have a date to share. For now, the SDK, console and ASP.NET Core applications are supported on Windows Arm64.

Closing

We're now so close to finishing off this release, and sending it out for broad production use. We believe it is ready. The production use that it is already getting at Microsoft brings us a lot of confidence. We're looking forward to you getting the chance to really take advantage of .NET 5.0 in your own environment.

It's been a long time since we've shared our social media pages. If you are on social media, check out the dotnet pages we maintain: Twitter, Facebook.

The post Announcing .NET 5.0 RC 2 appeared first on .NET Blog.


macOS Catalina is now available for download

Sugandha Lahoti
08 Oct 2019
3 min read
Apple released macOS Catalina today as the next major update to the company's Mac operating system. With macOS Catalina, iTunes is broken into separate apps for Apple Music, Podcasts, and Apple TV. Catalina also features the Apple Arcade game subscription service and Sidecar, which extends Mac desktops to a second display. For developers, Catalina has Mac Catalyst to build versions of iPad apps compatible with Mac. macOS Catalina was officially revealed in June at WWDC 2019, and the public beta was released later that month.

What's new in macOS Catalina

Sidecar

Sidecar extends your Mac workspace by using an iPad as a second display, both wirelessly and when plugged in. Sidecar also supports the Apple Pencil, letting you work in any Mac app or third-party Mac app that supports stylus input. According to an Apple white paper, the only laptops Sidecar works on are MacBooks from 2016 or later, MacBook Airs from 2018 or later, and MacBook Pros from 2016 or later. All of them use Apple's butterfly keyboard.

Addition of Apple Arcade

The Apple Arcade game subscription service is available at $4.99 per month to play games on Mac. Apple Arcade subscribers get the full version of every game, including all updates and expansions, without any ads or additional in-game purchases. The service is launching with a 30-day free trial, and a single subscription includes access for up to six family members with Family Sharing.

iTunes replaced with new entertainment apps

iTunes saw its long-awaited death and was replaced by three new apps: Apple Music, Apple Podcasts, and Apple TV. The Music app features over 50 million songs, playlists, and music videos. Apple Podcasts offers more than 700,000 shows in its catalog. Apple TV+, Apple's video subscription service, will be available in the Apple TV app for Mac starting November 1.

The removal of iTunes, however, is a problem for DJs who rely on XML files to sort through file libraries and quickly find tracks while performing. According to Apple, along with Catalina's removal of iTunes, users are also losing XML file support, as all native music playback on Macs moves over to the official Music app, which has a new library format.

https://twitter.com/danideahl/status/1181342504949633025

Additional features

You also get Screen Time on macOS and stricter privacy protections. Apps will have to ask for permission to access the desktop, documents, iCloud Drive, and external storage. With Activation Lock, any Mac that has a T2 security chip cannot be erased and reactivated without the Apple ID password. 'Find My' combines 'Find My iPhone' and 'Find My Friends' into a single, easy-to-use app on Mac, iPad, and iPhone. Mail in macOS Catalina adds the ability to block email from a specified sender, mute an overly active thread, and unsubscribe from commercial mailing lists.

The macOS Catalina update is a free download, and it can be installed by clicking on the Apple icon in the upper left corner of your screen, choosing System Preferences, and then selecting Software Update.

Apple bans HKmap.live, a Hong Kong protest safety app from the iOS Store as it makes people 'evade law enforcement'
Apple iPadOS now available for download with Slide Over and Split View, Home Screen updates, and more
Apple's September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, Apple TV+ and more


OpenJDK Project Valhalla is now in Phase III

Prasad Ramesh
10 Oct 2018
3 min read
Project Valhalla is an OpenJDK project started in 2014 in an experimental stage. It is headed by Oracle Java language architect Brian Goetz and supported by the HotSpot group. The project was created to introduce value-based optimizations to JDK 10 and above. The goal of Project Valhalla is to explore and support the development of advanced Java VM and language features like value types, generic specialization, and variable handles.

The Project Valhalla members met last week in Burlington, MA, to discuss the current project status and future plans in detail. Goetz notes that it was a very productive meeting, with members either attending in person or connecting via calls. After over four years of the project, the members decided to meet as it seemed like a good time to assess the project. Goetz states: "And, like all worthwhile projects, we hadn't realized quite how much we had bitten off. (This is good, because if we had, we'd probably have given up.)" This meeting marks the start of Phase III of Project Valhalla.

Phase I focused on language and libraries: trying to figure out what exactly a clean migration to value types and specialized generics would look like. This included steps to migrate core APIs like Collections and Streams, and understanding the limitations of the current VM. This enabled a vision for the VM that was needed. Phase I produced three prototypes, Models 1-3. The exploration areas of these models included specialization mechanics (M1), handling of wildcards (M2), and classfile representations for specialization and erasure (M3). At this point, the list of VM requirements became too long, and the team had to take a different approach.

Phase II took on the problem from the VM up, with two additional rounds of prototypes, namely MVT and LW1. LW1 was a risky experiment: sharing the L-carrier and a* bytecodes between references and values while not losing performance. If this could be achieved, many of the problems from Phase I would go away. It was successful, and the team now has a richer base for further work.

The next target is L2, which will capture the choices made so far, provide a useful testbed for library experiments, and set the stage for tackling the remaining open questions between now and L10. L10 is the target for a first preview, which eventually should support value types and erased generics over values.

For more information, you can read the mail on the Project Valhalla mailing list.

JDK 12 is all set for public release in March 2019
State of OpenJDK: Past, Present and Future with Oracle
No more free Java SE 8 updates for commercial use after January 2019

MLOps: DevOps for Machine Learning from .NET Blog

Matthew Emerick
13 Oct 2020
1 min read
Machine Learning Operations (MLOps) is like DevOps for the machine learning lifecycle. This includes things like model deployment and management and data tracking, which help with productionizing machine learning models. Through the survey below, we'd love to get feedback on your current DevOps practices as well as your prospective usage of MLOps in .NET. We'll use your feedback to drive the direction of MLOps support in .NET.

Take the survey

The post MLOps: DevOps for Machine Learning appeared first on .NET Blog.


OpenCV 4.0 is on schedule for July release

Pavan Ramchandani
10 Apr 2018
3 min read
There has been some exciting news from OpenCV: OpenCV developer Vadim Pisarevsky announced work on OpenCV 4 in the GitHub repository of OpenCV and addressed why the time is right for the release of OpenCV 4. OpenCV 3 was released in 2015, taking six years to come out after OpenCV 2, which was released in 2009.

OpenCV 3 has been built around the C++98 standard. Rewriting the library in a more recent version of C++, like C++11 or a later standard, would mean breaking "binary compatibility". This makes it important to move on from the promises OpenCV 3 made. There are two interesting concepts to know here: binary compatibility and source-level compatibility. OpenCV had promised to stay binary-compatible across versions, meaning new OpenCV releases would remain compatible with library calls built against previous versions. Moving from the C++98 standard to a recent C++ standard will break this promise. However, OpenCV has looked into this and found that not much harm will be caused by the migration, hence the decision to relax "binary compatibility" and move to "source compatibility" with the new release.

Apart from migrating to the latest C++ standards, the OpenCV library needs refactoring and new module additions for deep learning and neural networks, given the heavy usage of OpenCV in machine learning. OpenCV developers can expect some big revisions in functions and modules. Here is a quick summary of what you might expect in this major release of OpenCV 4.0:

Hardware-accelerated Video I/O module: This module maximizes OpenCV performance using software and hardware accelerators in the machine, so calling it in OpenCV 4 will harness that acceleration (see the capture sketch at the end of this piece).

HighGUI module (revised): With the enhancement of this module, you can efficiently read video from a camera or from files and also perform write operations on them. This module comes with a lot of functionality for media I/O operations.

Graph API module: This module adds support for efficiently defining and running image processing pipelines as graphs.

Point Cloud module: The point cloud module contains algorithms such as feature estimation, model fitting, and segmentation. These algorithms can be used for filtering noisy data, stitching 3D point clouds, and segmenting parts of an image, among other tasks.

Tracking, Calibration, and Stereo modules, among other features that will benefit image processing with OpenCV.

You can find the full list of new modules that might be added to OpenCV 4 on the issues page of the OpenCV repo. The OpenCV community is relying on its huge developer base to help close the open issues within the expected release window of July 2018. Functionality that doesn't make it into the OpenCV 4 release will be rolled into the OpenCV 4.x releases.
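As promised above, here is a minimal capture loop of the kind the hardware-accelerated video I/O module speeds up. This is our sketch against the existing OpenCV Python bindings (the cv2 API shown predates OpenCV 4 and is not new 4.0 functionality):

    import cv2  # pip install opencv-python

    # Open the default camera through OpenCV's video I/O layer.
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()  # grab one frame; ok is False on failure
        if not ok:
            break
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
            break
    cap.release()
    cv2.destroyAllWindows()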
While you wait for OpenCV 4, enjoy these OpenCV 3 tutorials:

New functionality in OpenCV 3.0
Fingerprint detection using OpenCV 3
OpenCV Primer: What can you do with Computer Vision and how to get started?
Image filtering techniques in OpenCV
Building a classification system with logistic regression in OpenCV
Exploring Structure from Motion Using OpenCV


What to expect in ASP.NET Core 3.0

Prasad Ramesh
30 Oct 2018
2 min read
ASP.NET Core 3.0 will come with some changes in the way projects work with frameworks. The .NET Core integration will be tighter, and third-party open source integration will change as well.

Changes to shared frameworks in ASP.NET Core 3.0

In ASP.NET Core 1.0, components were referenced as plain packages. From ASP.NET Core 2.1, they were also available as a .NET Core shared framework. ASP.NET Core 3.0 aims to reduce the issues of working with a shared framework. The change removes some Json.NET (Newtonsoft.Json) and Entity Framework Core (Microsoft.EntityFrameworkCore.*) components from the ASP.NET Core 3.0 shared framework. For the areas of ASP.NET Core that depend on Json.NET, there will be packages that support the integration; the default areas will be updated to use the in-box JSON APIs. Also, Entity Framework Core will be shipped as "pure" NuGet packages.

Shift to .NET Core from .NET Framework

The .NET Framework will get fewer of the new features that come to .NET Core in future releases, a change made so that updates don't break existing applications. To leverage the features coming to .NET Core, ASP.NET Core will only run on .NET Core starting from version 3.0. Developers currently using ASP.NET Core on .NET Framework can continue to do so until the end of the LTS support period on August 21, 2021.

Third-party components will be filtered

Some third-party components will be removed, but Microsoft will support the open source community with integration APIs, contributions to existing libraries by Microsoft engineers, and project templates to ensure smooth integration of these components. Work is also being done on streamlining the experience for building HTTP APIs, and on a new API client generation system.

For more details, visit the Microsoft website.

.NET Core 3.0 and .NET Framework 4.8 more details announced
.NET Core 2.0 reaches end of life, no longer supported by Microsoft
Microsoft's .NET Core 2.1 now powers Bing.com

TextMate 2.0, the text editor for macOS releases

Amrata Joshi
16 Sep 2019
3 min read
Yesterday, the team behind TextMate released TextMate 2.0 and announced that its code is available via the GitHub repository. The team had open-sourced the alpha version of TextMate 2.0 back in 2012; one of the reasons for open-sourcing the code was to signal that Apple isn't limiting user and developer freedom on the Mac platform. In this release, the qualifier suffix in the version string has been deprecated, and the 32-bit APIs have been replaced. The release also comes with improved accessibility support.

What's new in TextMate 2.0?

Easy swapping: This release allows users to easily swap pieces of code.

Convenient search results: TextMate presents search results in a way that lets users switch between matches, extract matched text, and preview desired replacements.

Version control: Users can see changes in the file browser view and can check the changes made to lines of code in the editor view.

Improved commands: TextMate features WebKit as well as a dialog framework for Mac-native or HTML-based interfaces.

Snippets: Users can turn commonly used pieces of text or code into snippets with transformations, placeholders, and more.

Bundles: Users can use bundles for customization across a number of different languages, workflows, markup systems, and more.

Macros: TextMate features macros that eliminate repetitive work.

This project was supposed to ship years ago, and its long-awaited release has made a lot of users happy. A user commented on GitHub, "Thank you @sorbits. For making TextMate in the first place all those years ago. And thank you to everyone who has and continues to contribute to the ongoing development of TextMate as an open source project. ~13 years later and this is still the only text editor I use… all day every day." Another user commented, "Immense thanks to all those involved over the years!"

A user commented on Hacker News, "I have a lot of respect for Allan Odgaard. Something happened, and I don't want to speculate, that caused him to take a break from Textmate (version 2.0 was supposed to come out 9 or so years ago). Instead of abandoning the project he open sourced it and almost a decade later it is being released. Textmate is now my graphical Notepad on Mac, with VS Code being my IDE and vim my text editor. Thanks Allan."

It is still not clear what took TextMate 2.0 so long to be released. According to a few users on Hacker News, Allan Odgaard, the creator of TextMate, wanted to improve on the design of TextMate 1 and realized it would require rewriting nearly everything, which may have consumed much of the time. Another comment reads, "As Allan was getting less feedback about the code he was working on, and less interaction overall from users, he became less motivated. As the TextMate 2 project dragged past its original timeline, both Allan and others in the community started to get discouraged. I would speculate he started to feel like more of the work was a chore rather than a joyful adventure."

To know more about this news, check out the release notes.

Other interesting news in programming

Introducing 'ixy', a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!
GitHub Package Registry gets proxy support for the npm registry


Python in Visual Studio Code released with enhanced Variable Explorer, Data Viewer, and more!

Amrata Joshi
27 Apr 2019
3 min read
This week, the team behind the Python extension for Visual Studio Code announced a new release. It comes with an enhanced Variable Explorer and Data Viewer, along with improvements to the Python Language Server.

What's new in Python in Visual Studio Code?

Enhanced Variable Explorer and Data Viewer

This release comes with a built-in Variable Explorer along with a Data Viewer, which help users easily view, inspect, and filter the variables in their application, including lists, NumPy arrays, pandas data frames, and more. The release shows a section for variables while running code and cells in the Python Interactive window. On expanding it, users can see a list of the variables in the current Jupyter session. More variables show up automatically as they get used in the code, and users can sort the variables in columns by clicking on each column header. Users can double-click on each row, or use the "Show variable in Data Viewer" button, to view the full data of each variable in the newly added Data Viewer, and can perform a simple search over its values (a small example to try appears at the end of this piece).

Improvements to debug configuration

In this release, the process of configuring the debugger has been simplified. If a user starts debugging through the Debug panel and no debug configuration exists, they will now be prompted to create a debug configuration for their application. Instead of manually configuring the launch.json file, users can now create a debug configuration through a set of menus.

Improvements to the Python Language Server

This release comes with fixes and improvements to the Python Language Server. The team has added back the features that were removed in the 0.2 release, including "Rename Symbol", "Go to Definition", and "Find All References". Also, loading time and memory usage have been improved when importing scientific libraries such as pandas, Plotly, and PyQt5, especially when running in full Anaconda environments.

Read Also: Visualizing data in R and Python using Anaconda [Tutorial]

Major changes

In this release, the default behavior of the debugger has been changed to display return values. "Unit Test" has been renamed to "Test" or "Testing". The debugStdLib setting has been replaced with justMyCode. This release also adds a setting to enable/disable the data science code lens, and the reliability of test discovery when using pytest has been improved.

Bug fixes

Issues with cell spacing have been resolved. Problems with errors not showing up for imports have been fixed. Issues with tabs in the comments section have been fixed.

To know more about this news, check out Microsoft's official blog post.
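To give the Variable Explorer something to show, a small sketch like the following works. It is our example, not Microsoft's (any variables will do, and it assumes numpy and pandas are installed); run it in the Python Interactive window using the extension's "# %%" cell marker, expand the Variables section, then double-click the DataFrame row to open it in the Data Viewer:

    # %%
    import numpy as np
    import pandas as pd

    # A few variables of different shapes for the explorer to list.
    scores = [88, 92, 79]
    matrix = np.arange(12).reshape(3, 4)
    frame = pd.DataFrame({"name": ["ada", "grace", "alan"], "score": scores})
    frame.head()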
Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
Debugging and Profiling Python Scripts [Tutorial]