
How-To Tutorials - ChatGPT

113 Articles

ChatGPT for SEO and Sentiment Analysis

Valentina Alto
28 Sep 2023
12 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

This article is an excerpt from the book Modern Generative AI with ChatGPT and OpenAI Models, by Valentina Alto. Master core data architecture design concepts and Azure Data & AI services to gain a cloud data and AI architect's perspective on developing end-to-end solutions.

Introduction

In the ever-evolving landscape of digital marketing, the emergence of AI-powered tools has redefined the way businesses engage with their audience. At the forefront of this transformation is ChatGPT, a versatile language model that is proving to be a game changer in two critical domains: Search Engine Optimization (SEO) and Sentiment Analysis. In this article, we embark on a journey to explore how ChatGPT is revolutionizing SEO strategies, enabling businesses to soar in search rankings, and how it wields its prowess in sentiment analysis to decipher customer feedback and enhance product quality.

Boosting Search Engine Optimization (SEO)

Another promising area where ChatGPT can be a game changer is Search Engine Optimization (SEO). This is the key element behind ranking in search engines such as Google or Bing, and it determines whether your website will be visible to users who are looking for what you promote.

Definition
SEO is a technique used to enhance the visibility and ranking of a website on search engine results pages (SERPs). It is done by optimizing the website or web page to increase the amount and quality of organic (unpaid) traffic from search engines. The purpose of SEO is to attract more targeted visitors to the website by optimizing it for specific keywords or phrases.

Imagine you run an e-commerce company called Hat&Gloves, which only sells, as you might have guessed, hats and gloves. You are now creating your e-commerce website and want to optimize its ranking.
Let's ask ChatGPT to list some relevant keywords to embed in our website:

Figure 7.18 – Example of SEO keywords generated by ChatGPT

As you can see, ChatGPT was able to create a list of keywords of different kinds. Some of them are pretty intuitive, such as Hats and Gloves. Others are related, with an indirect link. For example, Gift ideas is not necessarily related to my e-commerce business; however, it could be very smart to include it in my keywords so that I can widen my audience.

Another key element of SEO is search engine intent. Search engine intent, also known as user intent, refers to the underlying purpose or goal of a specific search query made by a user in a search engine. Understanding search engine intent is important because it helps businesses and marketers create more targeted and effective content and marketing strategies that align with the searcher's needs and expectations.

There are generally four types of search engine intent:

- Informational intent: The user is looking for information on a particular topic or question, such as What is the capital of France? or How to make a pizza at home.
- Navigational intent: The user is looking for a specific website or web page, such as Facebook login or Amazon.com.
- Commercial intent: The user is looking to buy a product or service but may not have made a final decision yet. Examples of commercial intent searches include best laptop under $1000 or discount shoes online.
- Transactional intent: The user has a specific goal to complete a transaction, which might refer to physical purchases or subscribing to services. Examples of transactional intent could be buy iPhone 13 or sign up for a gym membership.

By understanding the intent behind specific search queries, businesses and marketers can create more targeted and effective content that meets the needs and expectations of their target audience.
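The same intent taxonomy can also be applied programmatically: you can ask the model, through the OpenAI API rather than the web UI, to cluster candidate keywords by intent. The sketch below is illustrative only and not from the book; the SDK call pattern, prompt wording, and model name are assumptions:

```python
# Sketch: cluster SEO keywords by user intent with the OpenAI chat API.
# The prompt wording and model name are illustrative assumptions.

INTENTS = ["Informational", "Navigational", "Commercial", "Transactional"]

def build_intent_prompt(keywords):
    """Build a prompt asking the model to group keywords by search intent."""
    intent_list = ", ".join(INTENTS)
    keyword_list = "\n".join(f"- {k}" for k in keywords)
    return (
        "You are an SEO expert. Cluster the following keywords into the four "
        f"search-intent categories ({intent_list}) and briefly justify each "
        f"assignment:\n{keyword_list}"
    )

def cluster_keywords(keywords, model="gpt-3.5-turbo"):
    """Send the prompt to the chat API (requires OPENAI_API_KEY to be set)."""
    from openai import OpenAI  # local import so the sketch loads without the SDK
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_intent_prompt(keywords)}],
    )
    return response.choices[0].message.content
```

Grouping the response by intent in this way feeds directly into planning which pages and content should target which queries.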
This can lead to higher search engine rankings, more traffic, and ultimately, more conversions and revenue.

Now, the question is, will ChatGPT be able to determine the intent of a given request? Before answering, it is worth noting that inferring the intent of a given prompt is the core business of Large Language Models (LLMs), including GPT. So, for sure, ChatGPT is able to capture prompts' intents. The added value here is that we want to see whether ChatGPT is able to determine the intent in a precise domain with a precise taxonomy: that of marketing. That is the reason why prompt design is once again pivotal in guiding ChatGPT in the right direction.

Figure 7.19 – Example of keywords clustered by user intent by ChatGPT

Finally, we could also go further and leverage once more the Act as… hack, which we already mentioned in Chapter 4. It would be very interesting indeed to understand how to optimize our website so that it reaches as many users as possible. In marketing, this analysis is called an SEO audit. An SEO audit is an evaluation of a website's SEO performance and potential areas for improvement. It is typically conducted by SEO experts, web developers, or marketers, and involves a comprehensive analysis of a website's technical infrastructure, content, and backlink profile.

During an SEO audit, the auditor will typically use a range of tools and techniques to identify areas of improvement, such as keyword analysis, website speed analysis, website architecture analysis, and content analysis.
The auditor will then generate a report outlining the key issues, opportunities for improvement, and recommended actions to address them.

Let's ask ChatGPT to act as an SEO expert and instruct us on what an SEO audit report should look like and which metrics and KPIs it should include. We can also ask ChatGPT to give us an example of one of its suggestions as follows:

Figure 7.20 – Example of ChatGPT acting as an SEO expert

ChatGPT was able to generate a pretty accurate analysis, with relevant comments and suggestions. Overall, ChatGPT has interesting potential for SEO-related activities, and it can be a good tool whether you are building your website from scratch or want to improve an existing one.

Sentiment analysis to improve quality and increase customer satisfaction

Sentiment analysis is a technique used in marketing to analyze and interpret the emotions and opinions expressed by customers toward a brand, product, or service. It involves the use of natural language processing (NLP) and machine learning (ML) algorithms to identify and classify the sentiment of textual data such as social media posts, customer reviews, and feedback surveys.

By performing sentiment analysis, marketers can gain insights into customer perceptions of their brand, identify areas for improvement, and make data-driven decisions to optimize their marketing strategies. For example, they can track the sentiment of customer reviews to identify which products or services are receiving positive or negative feedback and adjust their marketing messaging accordingly. Overall, sentiment analysis is a valuable tool for marketers to understand customer sentiment, gauge customer satisfaction, and develop effective marketing campaigns that resonate with their target audience.

Sentiment analysis has been around for a while, so you might be wondering what ChatGPT could bring as added value.
Well, besides the accuracy of the analysis (it being one of the most powerful models on the market right now), ChatGPT differentiates itself from task-specific sentiment analysis tools by being a general-purpose model. This means that when we use ChatGPT for sentiment analysis, we are not using a dedicated API built for that one task: the core idea behind ChatGPT and OpenAI models is that they can assist the user in many general tasks at once, interacting with a task and changing the scope of the analysis according to the user's request.

So, for sure, ChatGPT is able to capture the sentiment of a given text, such as a Twitter post or a product review. However, ChatGPT can also go further and assist in identifying specific aspects of a product or brand that are positively or negatively impacting the sentiment. For example, if customers consistently mention a particular feature of a product in a negative way, ChatGPT can highlight that feature as an area for improvement. Or, ChatGPT might be asked to generate a response to a particularly delicate review, keeping in mind the sentiment of the review and using it as context for the response. Again, it can generate reports that summarize all the negative and positive elements found in reviews or comments and cluster them into categories.

Let's consider the following example. A customer has recently purchased a pair of shoes from my e-commerce company, RunFast, and left the following review:

I recently purchased the RunFast Prodigy shoes and have mixed feelings about them. On one hand, the shoes are incredibly comfortable and provide great support for my feet during my daily runs. The cushioning is top-notch and my feet feel less fatigued after my runs than with my previous shoes. Additionally, the design is visually appealing and I received several compliments on them. However, on the other hand, I've experienced some durability issues with these shoes.
The outsole seems to wear down rather quickly and the upper material, while breathable, is showing signs of wear after only a few weeks of use. This is disappointing, considering the high price point of the shoes. Overall, while I love the comfort and design of the RunFast Prodigy shoes, I'm hesitant to recommend them due to the durability issues I've experienced.

Let's ask ChatGPT to capture the sentiment of this review:

Figure 7.21 – ChatGPT analyzing a customer review

From the preceding figure, we can see how ChatGPT didn't limit itself to providing a label: it also explained both the positive and negative elements characterizing the review, which has mixed feelings and hence can be labeled as neutral overall. Let's try to go deeper into that and ask for some suggestions about improving the product:

Figure 7.22 – Suggestions on how to improve my product based on customer feedback

Finally, let's generate a response to the customer, showing that we, as a company, do care about customers' feedback and want to improve our products.

Figure 7.23 – Response generated by ChatGPT

The example we saw was a very simple one with just one review. Now imagine we have tons of reviews, as well as diverse sales channels where we receive feedback. Imagine the power of tools such as ChatGPT and OpenAI models, which are able to analyze and integrate all of that information and identify the pluses and minuses of your products, as well as capture customer trends and shopping habits. Additionally, for customer care and retention, we could also automate review responses using the writing style we prefer.
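At scale, the labeling and response steps described here can be scripted against the OpenAI API rather than pasted into the web UI. The sketch below is illustrative only: the label set, prompt wording, tone handling, and model name are my assumptions, not the book's.

```python
# Sketch: batch sentiment labeling plus styled review replies via the OpenAI API.
# Label set, prompt wording, tones, and model name are illustrative assumptions.

LABELS = ("positive", "negative", "neutral")

def build_sentiment_prompt(review):
    """Ask for one label plus the aspects driving the sentiment."""
    return (
        "Classify the sentiment of this product review as exactly one of "
        f"{', '.join(LABELS)}, then list the aspects mentioned positively "
        f"and negatively:\n\n{review}"
    )

def build_reply_messages(review, tone="empathetic"):
    """Pair a tone-setting system message with the review to answer."""
    return [
        {"role": "system",
         "content": f"You are a customer-care assistant. Reply in a tone that is {tone}."},
        {"role": "user", "content": f"Write a short reply to this review:\n{review}"},
    ]

def _chat(messages, model="gpt-3.5-turbo"):
    """Send one chat request (requires OPENAI_API_KEY in the environment)."""
    from openai import OpenAI  # local import so the sketch loads without the SDK
    client = OpenAI()
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

def analyze_reviews(reviews):
    """Label each review, then draft a reply for it."""
    return [
        {
            "review": r,
            "sentiment": _chat([{"role": "user", "content": build_sentiment_prompt(r)}]),
            "reply": _chat(build_reply_messages(r)),
        }
        for r in reviews
    ]
```

Swapping the `tone` argument is one way to implement the different chatbot writing styles discussed next.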
In fact, by tailoring your chatbot's language and tone to meet the specific needs and expectations of your customers, you can create a more engaging and effective customer experience. Here are some examples:

- Empathetic chatbot: A chatbot that uses an empathetic tone and language to interact with customers who may be experiencing a problem or need help with a sensitive issue
- Professional chatbot: A chatbot that uses a professional tone and language to interact with customers who may be looking for specific information or need help with a technical issue
- Conversational chatbot: A chatbot that uses a casual and friendly tone to interact with customers who may be looking for a personalized experience or have a more general inquiry
- Humorous chatbot: A chatbot that uses humor and witty language to interact with customers who may be looking for a light-hearted experience or to defuse a tense situation
- Educational chatbot: A chatbot that uses a teaching style of communication to interact with customers who may be looking to learn more about a product or service

In conclusion, ChatGPT can be a powerful tool for businesses to conduct sentiment analysis, improve their quality, and retain their customers. With its advanced natural language processing capabilities, ChatGPT can accurately analyze customer feedback and reviews in real time, providing businesses with valuable insights into customer sentiment and preferences. By using ChatGPT as part of their customer experience strategy, businesses can quickly identify any issues that may be negatively impacting customer satisfaction and take corrective action.
Not only can this help businesses improve their quality, but it can also increase customer loyalty and retention.

Conclusion

In this article, we learned how to use ChatGPT to enhance SEO analysis and to capture the sentiment of reviews, social media posts, and other customer feedback. As ChatGPT continues to advance and evolve, it is likely that we will see even more involvement in the marketing industry, especially in the way companies engage with their customers. In fact, relying heavily on AI allows companies to gain deeper insights into customer behavior and preferences. The key takeaway for marketers is to embrace these changes and adapt to the new reality of AI-powered marketing in order to stay ahead of the competition and meet the needs of their customers.

Author Bio

Valentina Alto graduated in 2021 with a degree in data science. Since 2020, she has been working at Microsoft as an Azure solution specialist, and since 2022, she has been focusing on data and AI workloads within the manufacturing and pharmaceutical industry. She has been working closely with system integrators on customer projects to deploy cloud architecture with a focus on modern data platforms, data mesh frameworks, IoT and real-time analytics, Azure Machine Learning, Azure Cognitive Services (including Azure OpenAI Service), and Power BI for dashboarding. Since commencing her academic journey, she has been writing tech articles on statistics, machine learning, deep learning, and AI in various publications and has authored a book on the fundamentals of machine learning with Python.


ChatGPT for Interview Preparation

Jakov Semenski
14 Sep 2023
6 min read
Introduction

Despite reading numerous forums and blogs, you feel unprepared for the technical questions that interviewers might throw at you. Isn't there a more efficient way? The countdown during a timed coding test and the watching eyes of an interviewer make you blank out, leading to regrets about not practicing in a more realistic setting. What if there was a tool available 24/7 that can not only answer your questions but also engage with you, much like an interviewer would?

Enter ChatGPT, your personalized interview coach.

Using ChatGPT for interview preparation is like having a personal trainer at the gym. While you can definitely exercise on your own, the trainer (or in this case, ChatGPT) guides you: it corrects your mistakes, provides targeted exercises (solutions), and improves your weak areas.

Let's put ChatGPT to the test with a hands-on exercise:

- Setting up a mock interview for a senior Java role
- Doing theoretical and coding questions
- Getting feedback on our answers
- Bonus: We will also check if it can make learning entertaining.

Here are Little-Known Ways to Ace Your Dev Interview with ChatGPT

Step 1: Set your level

ChatGPT doesn't know your level, so we need to set proper expectations.

Step 2: Set up a mock interview rehearsal

Now, we need to let ChatGPT know that we are not interested in answers, but rather feedback. Here is the full prompt:

I want to rehearse for the senior Java interview. Currently, I am a junior and have experience only with basics and hello-world examples. Don't show me the answer yet. Once I answer, check if I was correct.

Here is the result:

ChatGPT: Of course! Let's simulate a senior Java interview scenario. I'll provide questions that are often asked for a senior Java position.
Try to answer them to the best of your ability, and then I'll provide feedback.

Question 1: Explain the difference between ArrayList and LinkedList in Java.

Go ahead and provide your answer when you're ready.

User: I think ArrayList uses a dynamic array, while LinkedList stores data in different objects and acts as a queue.

ChatGPT: You've touched on the basics, but let's dive a bit deeper.

ArrayList:
- Internal Structure: As you correctly mentioned, ArrayList uses a dynamic array to store its elements. This means its capacity grows automatically when we add more elements than its initial capacity.
- Access Time: Because of this structure, accessing an element by its index is fast, i.e., O(1).
- Insertion/Deletion Time: However, inserting or deleting an element from anywhere except the end of the list takes O(n) time on average, as it may require shifting elements.
- Memory Overhead: It's relatively low, as you're mostly dealing with the raw array and a few integer counters.

LinkedList:
- Internal Structure: LinkedList uses a doubly-linked list. Each element (node) has a data part and two pointers, one pointing to the next element and the other to the previous element.
- Access Time: Getting to an element by index isn't as fast, since it requires walking through the list. It's O(n) in the worst case.
- Insertion/Deletion Time: However, once you have a reference to a node (say via an iterator), adding or removing is O(1), provided you don't need to search for the node.
- Memory Overhead: Higher than ArrayList because of the two pointers with each data item.

Now, one more nuanced point: ArrayList performs better in scenarios where you are frequently accessing elements via their indices and have fewer insertions/deletions. LinkedList is better suited for operations that involve frequent insertions or deletions from the beginning or middle of the list.

You had a good start! Keep refining your understanding, and you'll be on your way to senior-level knowledge.
Would you like to proceed with the next question?

Step 3: Now, the coding test

We can now instruct ChatGPT to give us a coding question:

Give me one more question, this time a coding question.

ChatGPT: Of course! Here's a simpler coding problem:

Question: Implement a method that takes an array of integers and returns the sum of all even numbers in the array.

Method Signature:
public int sumOfEvens(int[] numbers) { // your code here }

Please go ahead and implement this, and I'll review your solution afterward!

User:
int sum = 0;
for (int number : numbers) {
    if (number % 2 == 0) {
        sum += number;
    }
}
return sum;

ChatGPT: That's correct! Your solution is concise and to the point. Here's the complete function for clarity: …

Step 4: Make it fun

To spice up the interview a bit, let's make ChatGPT talk in a funny way so we can get some laughs along the way.

Prompt: Write every sentence in the speaking style of a fast-talking wise guy from 1940s gangster movies.

ChatGPT: Alright, alright! I'll take it down a notch for you. No need to get all flustered.

Question 3: Look here, champ, you ever heard of this thing called "Encapsulation"? It's one of those big fancy terms they throw around. Can you give me the lowdown on what it means in Java? Just the basics, ya know? Alright, give it your best shot! And remember, keep your chin up, kid!

User: It basically means binding variables and methods together into a single unit and preventing them from being accessed by other classes.

ChatGPT: Hey, not too shabby, kid! You've got the right idea. Encapsulation, in Java and many other object-oriented languages, is about keeping the internal states or data of an object private and only allowing modification through public methods. It's a way to protect the integrity of the data and control how it's accessed or modified. It's like having a vault, see? You don't just let anyone waltz in and take whatever they want. You got guards, and specific ways to access what's inside.

Good job, kiddo!
Anything else you wanna test your chops on, or are we square?

You can find the full chat at this link: https://chat.openai.com/share/b9f8af45-1187-4fa9-a884-4565f37d2884

Limitations

Keep in mind that ChatGPT is not always correct, even though it can sound convincing. If you are unsure, always double-check coding answers with a compiler.

Conclusion

ChatGPT is your ticket to mastering technical interviews. Using ChatGPT ensures you get direct, concise, and often up-to-date explanations of specific problems, making your interview preparation more efficient. Engage, learn, and get instant feedback to always stay a step ahead. As the next step, try it yourself and practice.

Remember:
- Inform ChatGPT about your current skill level.
- If you are not happy with the questions or topic, give ChatGPT more context.
- Lastly, make it more fun.

Author Bio

Jakov Semenski is an IT Architect working at IBMiX with almost 20 years of experience. He is also a ChatGPT speaker at the WeAreDevelopers conference and shares valuable tech stories on LinkedIn.


Creating Vulnerability Assessment Plans with ChatGPT

Clint Bodungen
26 Dec 2023
9 min read
This article is an excerpt from the book ChatGPT for Cybersecurity Cookbook, by Clint Bodungen. Master ChatGPT and the OpenAI API, and harness the power of cutting-edge generative AI and large language models to revolutionize the way you perform penetration testing, threat detection, and risk assessment.

Introduction

In this recipe, you'll learn how to harness the power of ChatGPT and the OpenAI API to create comprehensive vulnerability assessment plans using network, system, and business details as input. This recipe is invaluable for both cybersecurity students and beginners looking to familiarize themselves with proper methods and tools for vulnerability assessments, as well as experienced cybersecurity professionals aiming to save time on planning and documentation.

Building upon the skills acquired in Chapter 1, you will delve deeper into establishing the system role of a cybersecurity professional specializing in vulnerability assessments. You'll learn how to craft effective prompts that generate well-formatted output using markdown language. This recipe will also expand on the techniques explored in Enhancing Output with Templates (Application: Threat Report) and Formatting Output as a Table (Application: Security Controls Table) in Chapter 1, enabling you to design prompts that produce the desired output format. Finally, you'll discover how to use the OpenAI API and Python to generate the vulnerability assessment plan, and then export it as a Microsoft Word file. This recipe will serve as a practical guide for creating detailed and efficient vulnerability assessment plans using ChatGPT and the OpenAI API.

Getting Ready

Before diving into the recipe, you should already have your OpenAI account set up and have obtained your API key. If not, revisit Chapter 1 for details.
You will also need to be sure you have the following Python libraries installed:

1. python-docx: This library will be used to generate Microsoft Word files. You can install it using the command pip install python-docx.
2. tqdm: This library will be used to display progress bars. You can install it using the command pip install tqdm.

How to do it…

In this section, we will walk you through the process of using ChatGPT to create a comprehensive vulnerability assessment plan tailored to a specific network and organization's needs. By providing the necessary details and using the given system role and prompt, you will be able to generate a well-structured assessment plan.

1. Begin by logging in to your ChatGPT account and navigating to the ChatGPT web UI.
2. Start a new conversation with ChatGPT by clicking the "New chat" button.
3. Enter the following prompt to establish a system role: You are a cybersecurity professional specializing in vulnerability assessment.
4. Enter the following message text, but replace the placeholders in the "{ }" brackets with the appropriate data of your choice. You can either combine this prompt with the system role or enter it separately as follows:

Using cybersecurity industry standards and best practices, create a complete and detailed assessment plan (not a penetration test) that includes: introduction, outline of the process/methodology, tools needed, and a very detailed multi-layered outline of the steps. Provide a thorough and descriptive introduction and as much detail and description as possible throughout the plan. The plan should cover not only the assessment of technical vulnerabilities on systems but also policies, procedures, and compliance. It should include the use of scanning tools as well as configuration review, staff interviews, and a site walk-around. All recommendations should follow industry-standard best practices and methods. The plan should be a minimum of 1500 words.
Create the plan so that it is specific for the following details:

Network Size: {Large}
Number of Nodes: {1000}
Type of Devices: {Desktops, Laptops, Printers, Routers}
Specific systems or devices that need to be excluded from the assessment: {None}
Operating Systems: {Windows 10, MacOS, Linux}
Network Topology: {Star}
Access Controls: {Role-based access control}
Previous Security Incidents: {3 incidents in the last year}
Compliance Requirements: {HIPAA}
Business Critical Assets: {Financial data, Personal health information}
Data Classification: {Highly confidential}
Goals and objectives of the vulnerability assessment: {To identify and prioritize potential vulnerabilities in the network and provide recommendations for remediation and risk mitigation.}
Timeline for the vulnerability assessment: {4 weeks}
Team: {3 cybersecurity professionals, including a vulnerability assessment lead and two security analysts}
Expected deliverables of the assessment: {A detailed report outlining the results of the vulnerability assessment, including identified vulnerabilities, their criticality, potential impact on the network, and recommendations for remediation and risk mitigation.}
Audience: {The organization's IT department, senior management, and any external auditors or regulators.}

Provide the plan using the following format and markdown language:

#Vulnerability Assessment Plan
##Introduction
Thorough introduction to the plan, including the scope, reasons for doing it, goals and objectives, and a summary of the plan
##Process/Methodology
Description and outline of the process/methodology
##Tools Required
List of required tools and applications, with their descriptions and reasons needed
##Assessment Steps
Detailed, multi-layered outline of the assessment steps

Hint: If you are performing this in the OpenAI Playground, it is advisable to use Chat mode and enter the role in the System window and the prompt in the User message window.
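The recipe's introduction mentions using the OpenAI API and Python to generate the plan and export it with python-docx. One way that could be wired up is sketched below; the model name, output file name, and the simple markdown-heading handling are my assumptions, not the book's implementation:

```python
# Sketch: generate the assessment plan via the OpenAI API and export it to Word.
# Model name, file name, and markdown handling are illustrative assumptions.

SYSTEM_ROLE = "You are a cybersecurity professional specializing in vulnerability assessment."

def generate_plan(user_prompt, model="gpt-3.5-turbo"):
    """Send the system role and user prompt to the chat API (requires OPENAI_API_KEY)."""
    from openai import OpenAI  # local import so the sketch loads without the SDK
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_ROLE},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

def heading_level(line):
    """Markdown heading depth: the number of leading '#' characters."""
    return len(line) - len(line.lstrip("#"))

def markdown_to_docx(markdown_text, path="vuln_assessment_plan.docx"):
    """Map '#'-prefixed lines to Word headings and everything else to paragraphs."""
    from docx import Document  # local import: pip install python-docx
    document = Document()
    for line in markdown_text.splitlines():
        level = heading_level(line)
        if level:
            document.add_heading(line.lstrip("#").strip(), level=min(level, 4))
        elif line.strip():
            document.add_paragraph(line)
    document.save(path)
```

With this shape, markdown_to_docx(generate_plan(prompt)) would turn the templated sections (#Vulnerability Assessment Plan, ##Introduction, and so on) into a structured Word document.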
Figure 2.1 shows the system role and user prompt entered into the OpenAI Playground.

Figure 2.1 – OpenAI Playground Method

5. Review the generated output from ChatGPT. If the output is satisfactory and meets the requirements, you can proceed to the next step. If not, you can either refine your prompt or re-run the conversation to generate a new output.
6. Once you have obtained the desired output, you can use the generated markdown to create a well-structured vulnerability assessment plan in your preferred text editor or markdown viewer.

Figure 2.2 shows an example of ChatGPT generating a vulnerability assessment plan using markdown language formatting.

Figure 2.2 – Example ChatGPT Assessment Plan Output

How it works…

This GPT-assisted vulnerability assessment plan recipe leverages the sophistication of natural language processing (NLP) and machine learning algorithms to generate a comprehensive and detailed vulnerability assessment plan. By adopting a specific system role and an elaborate user request as a prompt, ChatGPT is able to customize its response to meet the requirements of a seasoned cybersecurity professional who is tasked with assessing an extensive network system. Here's a closer look at how this process works:

1. System Role and Detailed Prompt: The system role designates ChatGPT as a seasoned cybersecurity professional specializing in vulnerability assessment. The prompt, which serves as the user request, is detailed and outlines the specifics of the assessment plan, from the size of the network and types of devices to the required compliance and the expected deliverables. These inputs provide context and guide ChatGPT's response, ensuring it is tailored to the complexities and requirements of the vulnerability assessment task.
2. Natural Language Processing and Machine Learning: NLP and machine learning form the bedrock of ChatGPT's capabilities.
It applies these technologies to understand the intricacies of the user request, learn from the patterns, and generate a well-structured vulnerability assessment plan that is detailed, specific, and actionable.

3. Knowledge and Language Understanding Capabilities: ChatGPT uses its extensive knowledge base and language understanding capabilities to conform to industry-standard methodologies and best practices. This is particularly important in the rapidly evolving field of cybersecurity, ensuring that the resulting vulnerability assessment plan is up to date and adheres to recognized standards.
4. Markdown Language Output: The use of markdown language output ensures that the plan is formatted in a consistent and easy-to-read manner. This format can be easily integrated into reports, presentations, and other formal documents, which is crucial when communicating the plan to IT departments, senior management, and external auditors or regulators.
5. Streamlining the Assessment Planning Process: The overall advantage of using this GPT-assisted vulnerability assessment plan recipe is that it streamlines the process of creating a comprehensive vulnerability assessment plan. You save time on planning and documentation and can generate a professional-grade assessment plan that aligns with industry standards and is tailored to the specific needs of your organization.

By applying these detailed inputs, you transform ChatGPT into a powerful tool that can assist in creating a comprehensive, tailored vulnerability assessment plan. This not only bolsters your cybersecurity efforts but also ensures your resources are utilized effectively in protecting your network systems.

Conclusion

In harnessing ChatGPT and the OpenAI API, this guide unlocks a streamlined approach to crafting detailed vulnerability assessment plans. Whether a novice or a seasoned cybersecurity professional, leveraging these tools optimizes planning and documentation.
By tailoring assessments to specific network intricacies, it fosters precision in identifying potential threats and fortifying defenses. Embrace this method not only to save time but also to ensure comprehensive security measures aligned with industry standards, safeguarding networks effectively.

Author Bio

Clint Bodungen is a cybersecurity professional with 25+ years of experience and the author of Hacking Exposed: Industrial Control Systems. He began his career in the United States Air Force and has since worked with many of the world's largest energy companies and organizations, including notable cybersecurity companies such as Symantec, Kaspersky Lab, and Booz Allen Hamilton. He has published multiple articles, technical papers, and training courses on cybersecurity and aims to revolutionize cybersecurity education using computer gaming ("gamification") and AI technology. His flagship product, ThreatGEN® Red vs. Blue, is the world's first online multiplayer cybersecurity simulation game, designed to teach real-world cybersecurity.
Valentina Alto
20 Sep 2023
10 min read
ChatGPT for Everyday Productivity

This article is an excerpt from the book, Modern Generative AI with ChatGPT and OpenAI Models, by Valentina Alto. Master core data architecture design concepts and Azure Data & AI services to gain a cloud data and AI architect's perspective to developing end-to-end solutions.

Introduction

In this article, we will cover the main activities ChatGPT can perform for general users daily to boost their productivity. We will focus on concrete examples of writing assistance, decision-making, information retrieval, and so on, with suggestions and prompts so that you can implement them on your own. By the end of this article, you will have learned how to use ChatGPT as a booster for the following activities:

Daily activities such as organizing agendas, meal-prepping, grocery shopping, and so on
Generating brand-new text content
Improving your writing skills and adapting the same content to different audiences
Retrieving documentation and information for research and competitive intelligence

Technical requirements

For this article, you will require a ChatGPT account. You can create one for free.

ChatGPT as a daily assistant

ChatGPT can serve as a valuable daily assistant, helping you manage your tasks and streamline your workflow. It can optimize your daily routine by providing personalized assistance, thus saving you time and enhancing your efficiency. Let's start with a general suggestion on how I could make my day more productive:

Figure – An example of ChatGPT generating a productive routine

The preceding routine is pretty generic since it doesn't focus on a specific list of activities (except studying).
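If you prefer scripting over the chat UI, the same kind of routine-building prompt can be sent to the OpenAI chat completions REST endpoint using nothing but Python's standard library. This is a minimal, hedged sketch — the model name, prompt wording, and the RUN_LIVE_DEMO guard are illustrative assumptions, not taken from the article:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_messages(tasks):
    """Compose a chat prompt asking for a time-blocked daily routine."""
    task_list = "\n".join(f"- {t}" for t in tasks)
    return [
        {"role": "system",
         "content": "You are a productivity assistant. Reply with a "
                    "time-blocked daily schedule, one line per block."},
        {"role": "user",
         "content": f"Organize these tasks into a productive day:\n{task_list}"},
    ]

def ask_chatgpt(messages, model="gpt-3.5-turbo"):
    """POST the request to the chat completions endpoint (needs a real API key)."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Guarded so the sketch never makes a network call by accident
if os.environ.get("RUN_LIVE_DEMO") and os.environ.get("OPENAI_API_KEY"):
    print(ask_chatgpt(build_messages(["reply to emails", "gym", "study NLP"])))
```

The `build_messages` helper can be reused for any of the prompts in this article by swapping the system and user content.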
Let's challenge ChatGPT to create a more task-oriented routine, including a list of activities we have to carry out during the day:

Figure – Optimized routine generated by ChatGPT on top of my activities

As you can see, ChatGPT acted here as a to-do list maker, managing time for me and helping me organize my busy day. Now, let's look at some more examples of using ChatGPT for more specific tasks, including text generation, writing assistance, and information retrieval.

Generating text

As a language model, ChatGPT is particularly suited for generating text based on users' instructions. For example, you could ask ChatGPT to generate emails, drafts, or templates that target a specific audience:

Figure – Example of an email generated by ChatGPT

Another example might be asking ChatGPT to create a pitch structure for a presentation you have to prepare:

Figure – Slideshow agenda and structure generated by ChatGPT

Improving writing skills and translation

Sometimes, rather than generating new content, you might want to revisit an existing piece of text, be it for style improvement purposes, audience changes, language translation, and so on. Let's look at some examples. Imagine that I drafted an email to invite a customer of mine to a webinar. I wrote two short sentences. Here, I want ChatGPT to improve the form and style of this email since the target audience will be executive-level:

Figure – Example of an email revisited by ChatGPT to target an executive audience

Now, let's ask the same thing but with a different target audience:

Figure – Example of the same email with a different audience, generated by ChatGPT

ChatGPT can also give you some feedback about your writing style and structure. Imagine, for example, that you wrote a script with scenography for your YouTube channel. You included the speech as well as images, clips, and video editing activities. You also know that your typical audience is between 15 and 25 years old.
You want feedback on your script and ask ChatGPT for it:

Figure – Example of ChatGPT providing feedback on a video script

As you can see, not only was ChatGPT able to give me feedback about the writing style, but it also suggested how I could improve the scenography of the whole video by including more visuals. I was impressed by this last one. ChatGPT was smart enough to see that there was no specific mention of the history of NLP in my introduction. Nevertheless, it sets up the expectation of that topic being treated later on. This means that ChatGPT also has expertise in terms of how an essay should be structured, and it was very precise in applying its judgment, knowing that it was just an introduction.

It is also impressive to note how the model can give different feedback depending on the context. With the video script, ChatGPT's feedback took into account that the final consumption of that content would be on screen. On the other hand, the essay's introduction lives in a more formal and academic context, with a specific structure, that ChatGPT was able to capture once more.

Last but not least, ChatGPT is also an excellent tool for translation. It knows at least 95 languages (if you have doubts about whether your language is supported, you can always ask ChatGPT directly). Here, however, a question might arise: what is the added value of ChatGPT for translation when we already have cutting-edge tools such as Google Translate? To answer this question, we have to consider some key differentiators and how we can leverage ChatGPT's embedded translation capabilities:

ChatGPT can capture the intent. This means that you could also bypass the translation phase since it is something that ChatGPT can do in the backend.
For example, if you write a prompt to produce a social media post in French, you could write that prompt in any language you want – ChatGPT will automatically detect it (without the need to specify it in advance) and understand your intent:

Figure – Example of ChatGPT generating an output in a language that is different from the input

ChatGPT can capture the more refined meaning of particular slang or idioms. This allows for a translation that is not literal, so that it can preserve the underlying meaning. For example, consider the British expression It's not my cup of tea, indicating something that is not the type of thing you like. Let's ask both ChatGPT and Google Translate to translate it into Italian:

Figure – Comparison between ChatGPT and Google Translate while translating from English into Italian

As you can see, ChatGPT can provide several Italian idioms that are equivalent to the original one, also in their slang format. On the other hand, Google Translate performed a literal translation, leaving behind the real meaning of the idiom. As with any other task, you can always provide context to ChatGPT. So, if you want your translation to have a specific slang or style, you can always specify it in the prompt. Or, even funnier, you can ask ChatGPT to translate your prompt with a sarcastic touch:

Figure – Example of ChatGPT translating a prompt with a sarcastic touch. The original content of the prompt was taken from OpenAI's Wikipedia page

All these scenarios highlight one of the key killer features of ChatGPT and OpenAI models in general. Since they represent the manifestation of what OpenAI defined as Artificial General Intelligence (AGI), they are not meant to be specialized (that is, constrained) on a single task.
On the contrary, they are meant to serve multiple scenarios dynamically, so that you can address a wide range of use cases with a single model. In conclusion, ChatGPT is able not only to generate new text but also to manipulate existing material to tailor it to your needs. It has also proven to be very precise at translating between languages, keeping jargon and language-specific expressions intact. In the next section, we will see how ChatGPT can assist us in retrieving information and competitive intelligence.

Quick information retrieval and competitive intelligence

Information retrieval and competitive intelligence are yet other fields where ChatGPT is a game-changer. The very first example of how ChatGPT can retrieve information is the most popular way it is used right now: as a search engine. Every time we ask ChatGPT something, it can retrieve information from its knowledge base and reframe it in an original way. One example involves asking ChatGPT to provide a quick summary or review of a book we might be interested in reading:

Figure – Example of ChatGPT providing a summary and review of a book

Alternatively, we could ask for some suggestions for a new book we wish to read based on our preferences:

Figure – Example of ChatGPT recommending a list of books, given my preferences

Furthermore, if we design the prompt with more specific information, ChatGPT can serve as a tool for pointing us toward the right references for our research or studies. Namely, you might want to quickly retrieve some background references about a topic you want to learn more about – for example, feedforward neural networks. Something you might ask ChatGPT is to point you to some websites or papers where this topic is widely treated:

Figure – Example of ChatGPT listing relevant references

As you can see, ChatGPT was able to provide me with relevant references to start studying the topic.
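This kind of reference-retrieval prompt becomes more useful when it pins down an output format you can parse afterwards. A small illustrative sketch — the prompt wording and reference format are assumptions, and the sample reply is hard-coded rather than a real ChatGPT response:

```python
import re

def reference_prompt(topic, n=5):
    """Ask ChatGPT for background references in a parse-friendly format."""
    return (
        f"List {n} well-known books or papers for learning about {topic}. "
        "Answer ONLY as a numbered list, one reference per line, "
        "formatted as: <number>. <author(s)> - <title> (<year>)."
    )

def parse_numbered_list(text):
    """Pull the individual items back out of a numbered-list reply."""
    return [m.group(1).strip()
            for m in re.finditer(r"^\s*\d+\.\s*(.+)$", text, re.MULTILINE)]

# A hard-coded stand-in for a ChatGPT reply, just to show the round trip
reply = "1. Goodfellow et al. - Deep Learning (2016)\n2. Bishop - PRML (2006)"
print(parse_numbered_list(reply))
# ['Goodfellow et al. - Deep Learning (2016)', 'Bishop - PRML (2006)']
```

Constraining the output format like this makes the model's answer easy to feed into notes, spreadsheets, or a reading-list app.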
However, it could go even further in terms of competitive intelligence. Let's say I'm writing a book titled Introduction to Convolutional Neural Networks – an Implementation with Python, and I want to do some research about potential competitors in the market. The first thing I want to investigate is whether there are already some competing titles around, so I can ask ChatGPT to generate a list of existing books with the same content:

Figure – Example of ChatGPT providing a list of competitive books

You can also ask for feedback in terms of the saturation of the market you want to publish in:

Figure – ChatGPT advising about how to be competitive in the market

Finally, let's ask ChatGPT to be more precise about what I should do to be competitive in the market where I will operate:

Figure – Example of how ChatGPT can suggest improvements regarding your book content to make it stand out

ChatGPT was pretty good at listing some good tips to make my book unique. Overall, ChatGPT can be a valuable assistant for information retrieval and competitive intelligence. However, it is important to remember that the knowledge base cut-off is 2021: this means that, whenever we need to retrieve real-time information, or when making a competitive market analysis for today, we might not be able to rely on ChatGPT. Nevertheless, this tool still provides excellent suggestions and best practices that can be applied, regardless of the knowledge base cut-off.

Conclusion

All the examples we saw in this article were modest representations of what you can achieve with ChatGPT to boost your productivity. These small hacks can greatly assist you with activities that might be repetitive (such as answering emails with a similar template or writing a daily routine) or onerous (such as searching for background documentation or competitive intelligence).

Author Bio

Valentina Alto graduated in 2021 in data science.
Since 2020, she has been working at Microsoft as an Azure solution specialist, and since 2022, she has been focusing on data and AI workloads within the manufacturing and pharmaceutical industry. She has been working closely with system integrators on customer projects to deploy cloud architecture with a focus on modern data platforms, data mesh frameworks, IoT and real-time analytics, Azure Machine Learning, Azure Cognitive Services (including Azure OpenAI Service), and Power BI for dashboarding. Since commencing her academic journey, she has been writing tech articles on statistics, machine learning, deep learning, and AI in various publications and has authored a book on the fundamentals of machine learning with Python.
Sagar Lad
22 Jun 2023
5 min read
Creating CI/CD Pipelines Effortlessly using ChatGPT

In the fast-paced world of software development, continuous integration and continuous delivery (CI/CD) pipelines have become crucial for efficient and reliable software deployment. However, building and maintaining these pipelines can be a daunting task, requiring extensive knowledge and expertise. But what if there was a way to simplify this process and make it more accessible to developers of all skill levels? Enter ChatGPT, the groundbreaking language model developed by OpenAI. In this article, we explore how ChatGPT can effortlessly guide developers in building robust CI/CD pipelines, revolutionizing the way software is delivered.

What is DevOps?

A DevOps team is a combination of the development team and the operations team working together to accelerate time to market and improve the quality of software development. The DevOps way of working is more of a shift in mindset, one that has a major effect on the team's and organization's way of working. With the DevOps mindset, there are no silos between the development and operations teams. A DevOps team mainly focuses on automation to increase reliability by enabling continuous integration and continuous deployment pipelines.

Image 1: DevOps Lifecycle

The DevOps lifecycle mainly consists of setting up an automated and collaborative environment to discover, plan, build, and test the artefacts. Once the artefacts are downloaded, they can be deployed to their respective environments.
Throughout the DevOps lifecycle, the complete team has to work closely together to maintain the alignment, velocity, and quality of the deliverables. DevOps implementation mainly involves the following activities:

Discover the product
An iterative approach for agile planning
Build pipelines for branching, merging, and workflows for the development process
Automated testing using CI pipelines
Deployment using release pipelines
Operationalize and manage end-to-end IT delivery
Continuous monitoring of the deployed software
Continuous feedback and improvements for future releases

ChatGPT for DevOps

Automation: DevOps tools like Jenkins, Ansible, Terraform, and so on provide workflow automation. In addition, we can use ChatGPT to automate the following DevOps activities that otherwise require manual intervention:

Automated testing: Testing scenarios and automated testing can be enabled using ChatGPT as part of the continuous build pipeline.
Release documentation: Maintaining documentation for each feature is a manual and tedious task. With the help of ChatGPT, we can write code in any language to create automated documentation for each release. With ChatGPT, we can generate code for Bicep templates and YAML for Azure DevOps, Terraform, Jenkins, and Lambda code for DevOps activities.
DevSecOps implementation: Security is a critical factor in developing any software. ChatGPT can be used to monitor cloud resources, manage and detect security vulnerabilities, and scan for networking issues, open ports, and database and storage configurations as per industry standards and requirements.
Continuous monitoring: A customized monitoring dashboard is key for proactive monitoring and data-driven decisions. We can use ChatGPT to generate dashboard code including a variety of components such as charts, graphs, and tables.

Let's now ask ChatGPT for each of these steps to create a DevOps process flow for a software project.
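Since the quality of what ChatGPT returns depends heavily on how detailed the prompt is, it can help to compose the pipeline request programmatically so that no stage is forgotten. A hedged sketch — the stage names and prompt wording below are illustrative assumptions, not the article's actual prompts:

```python
def pipeline_prompt(stages, project="my-project"):
    """Build a detailed request for a multi-stage Azure DevOps YAML CI pipeline."""
    wanted = "\n".join(f"- a stage for {s}" for s in stages)
    return (
        f"Create an Azure DevOps YAML CI pipeline for '{project}' with:\n"
        f"{wanted}\n"
        "Trigger on the main branch, use variable groups for settings, "
        "and output only valid YAML with no commentary."
    )

# Stages mirroring the kind of pipeline discussed in this article
print(pipeline_prompt([
    "SonarQube code quality checks",
    "Fortify security scanning",
    "automated testing",
    "publishing build artifacts",
]))
```

Asking for "only valid YAML with no commentary" makes it easier to drop the response straight into an azure-pipelines.yml file for review.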
The first step is to set up the Azure DevOps repository structure, including the branching policies and pre- and post-deployment approvals:

Image 2: Azure DevOps repo structure and branching policies

As you can see, ChatGPT's recommendation is to create a new Azure DevOps repository with proper naming conventions. To set up the branching policies, we need to configure the build validations and set up the reviewers, status checks, and work item linking in Azure Boards.

Image 3: Azure DevOps Continuous Integration Pipeline

Here, we have asked ChatGPT to create a YAML continuous integration build pipeline in Azure DevOps, including code quality checks and testing. ChatGPT provides us with a YAML pipeline that has multiple stages – one for SonarQube, one for Fortify code quality checks, one for automation testing, and one to download the artefacts. Once the CI pipeline is ready, let's ask ChatGPT to build an IaC (Infrastructure as Code) pipeline to deploy Azure services like Azure Data Factory and Azure Databricks.

Image 4: Azure DevOps Continuous Deployment Pipelines

Here, we can see the step-by-step process to build the continuous deployment pipelines, which use a shell script to deploy Azure Data Factory and the Azure CLI to deploy Azure Databricks. This pipeline also integrates with the branches to include, and it uses variable groups to stay generic. Finally, we can ask ChatGPT how to build monitoring dashboards.

Conclusion

ChatGPT is not a threat to DevOps engineers; rather, it boosts productivity when teams embrace the technology to set up and implement the DevOps way of working. To get the desired results, detailed prompt input should be provided so that the content ChatGPT generates meets expectations.

Author Bio

Sagar Lad is a Cloud Data Solution Architect with a leading organization and has deep expertise in designing and building Enterprise-grade Intelligent Azure Data and Analytics Solutions.
He is a published author, content writer, Microsoft Certified Trainer, and C# Corner MVP. Links: Medium, Amazon, LinkedIn
Sagar Lad
16 Jun 2023
6 min read
Creating a Data Model with ChatGPT is Easier than you think

In today's data-driven world, the ability to build accurate and efficient data models is paramount for businesses and individuals alike. However, the process of constructing a data model can often be complex and daunting, requiring specialized knowledge and technical skills. But what if there was a way to simplify this process and make it accessible to a wider audience? Enter ChatGPT, a powerful language model developed by OpenAI. In this article, we will explore how ChatGPT can be leveraged to build data models easily, using a practical example. By harnessing the capabilities of ChatGPT, you'll discover how data modeling can become a more approachable and intuitive task for everyone, regardless of their technical background.

Build a Data Model with ChatGPT

Consider data modeling as the process of drawing diagrams for software applications that provide an overview of all the data pieces they include. The data flow is depicted in the diagram using text and symbols. It serves as a model for creating a new database that will allow a company to utilize the data efficiently for its needs. The primary objective of the data model is to establish an overall picture of the types of data that are used, how they are kept in the system, the relationships between the data entities, and the various ways in which they can be arranged and grouped. The norms and procedures for gathering feedback from the business stakeholders are taken into consideration when building data models. Like a roadmap or blueprint, the data model provides a better understanding of what is being designed. It offers a comprehensive review of the standardized methodologies and schema to define and manage data in a way that is common and uniform throughout the organization. According to the level of abstraction, there are three different types of data models:

Conceptual data model: It provides a helicopter view of the system description, its organization, and the business rules to be considered.
Initial project requirements are captured using the conceptual model. It mainly consists of the business entities, their constraints and characteristics, and the relationships between them, along with data integrity and security requirements.

Logical data model: The logical data model provides detailed insights into the concepts and relationships, consisting of data attributes and the relationships between the entities. It is very useful for data-driven projects or initiatives.

Physical data model: It provides an overview of how the data should be stored physically within the database. It is a final design to implement the relational database, including the relationships using primary and foreign keys.

Image 1: Types of Data Modelling Techniques

As seen in the graphic above, data models are created using a variety of data modeling methodologies. The most popular data modeling techniques used by corporate organizations are entity relationship modeling and dimensional modeling. Tools such as Erwin Data Modeler, ER/Studio, and Archi are available on the market to construct data models using these methodologies. Data modeling mainly involves the following steps:

Identify the entities
Find the entity properties
Create relationships between the entities
Correlate attributes to the entities
Define the degree of normalization to improve performance
Validate and finalize the data model

Let's start with creating a data model using ChatGPT. The goal is to ask ChatGPT to start with the data modeling activities for the anti-money laundering (AML) system of a banking domain:

Image 1: The data model for the banking system, Part 1
Image 2: Data Modelling for AML Process for Bank

As you can see in the image, once we provide an input, ChatGPT responds with a step-by-step process for building the data model.
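The modeling steps themselves (identify entities, attach attributes, relate them, validate) can also be sketched in code before any tooling is involved. The AML entity and attribute names below are illustrative assumptions, not the actual model produced by ChatGPT:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: list = field(default_factory=list)
    primary_key: str = ""

@dataclass
class Relationship:
    parent: str       # entity holding the primary key
    child: str        # entity holding the foreign key
    cardinality: str  # e.g. "1:N"

# Illustrative AML entities
customer = Entity("Customer", ["customer_id", "name", "risk_rating"], "customer_id")
account = Entity("Account", ["account_id", "customer_id", "balance"], "account_id")
txn = Entity("Transaction", ["txn_id", "account_id", "amount", "flagged"], "txn_id")

model = {
    "entities": [customer, account, txn],
    "relationships": [Relationship("Customer", "Account", "1:N"),
                      Relationship("Account", "Transaction", "1:N")],
}

def validate(model):
    """Basic validation: every relationship must reference defined entities."""
    names = {e.name for e in model["entities"]}
    return all(r.parent in names and r.child in names
               for r in model["relationships"])

print(validate(model))  # True
```

A lightweight representation like this makes it easy to sanity-check a conceptual model returned by ChatGPT before transcribing it into a tool such as Erwin.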
The first step is to understand the AML regulations and identify the stakeholders of the system to capture the requirements. Once the stakeholders are identified, the next step is to define the data modeling goals, including the list of data sources, and perform data profiling. Once the data profiling steps are done, the next activity is to create the conceptual, logical, and physical data models. Now, let's ask ChatGPT to create a conceptual model with all the entities:

Image 3: Conceptual Data Model, Part 1
Image 4: AML Conceptual Model

After the input, ChatGPT responds with the list of actors, entities, and relationships between the entities to define the conceptual model. With this information, we can get a high-level overview of the system by building the conceptual data model. Let's ask ChatGPT to build the logical data model once the conceptual data model is ready:

Image 5: AML data model for ERwin Tool
Image 6: AML Logical Data Model, Part 2

As you can see in the above image, the step-by-step process to create a logical data model is to open the Erwin tool and create a new data model. In the new data model, add all entities, their attributes, and the relationships between the entities. Once entities are defined, set up primary and foreign keys for all entities and validate the data model. After the validation, address the review comments, finalize the logical data model, and generate the documentation for it. Next, let's ask ChatGPT if it can add new customer information to the existing conceptual model:

Image 5: AML Logical Data Model with Customer Information

As we can see in the above image, ChatGPT asks us to first identify the source information and create an entity and attributes for it. Once that is done, we have to define the cardinality to understand how entities are related to each other.
Then we define the primary and foreign key relationships, validate the data model, and generate documentation.

Conclusion

In this article, we covered the importance of building a data model and the step-by-step process of creating one. Later in the article, we also saw how to use ChatGPT to create conceptual and logical data models.

Author Bio

Sagar Lad is a Cloud Data Solution Architect with a leading organisation and has deep expertise in designing and building Enterprise-grade Intelligent Azure Data and Analytics Solutions. He is a published author, content writer, Microsoft Certified Trainer, and C# Corner MVP. Links: Medium, Amazon, LinkedIn
Julian Melanson
12 Jul 2023
6 min read
ChatGPT and AI in the Crypto Market

OpenAI's ChatGPT has gained significant attention since it first launched, and with its versatile capabilities and high accuracy, it has the potential to make a substantial impact on the crypto market. It's crucial that we explore how AI and Natural Language Processing (NLP) can assist in fraud detection and prevention, understand the capabilities and limitations of ChatGPT in the crypto industry and trading, highlight the importance of AI in safeguarding the crypto market, and discuss ChatGPT's role in crypto compliance, AML, security, and its future implications.

AI and NLP for Fraud Detection and Prevention

NLP is a branch of AI that enables machines to read, understand, and draw conclusions from human languages. By using computational linguistics and statistical models, NLP can reveal suspicious behavior patterns and uncover fraud in financial transactions. For instance, NLP can detect inconsistencies in credit applications or identify suspicious transactions on credit cards.

Capabilities and Limitations of ChatGPT in the Crypto Industry

In the crypto industry, ChatGPT has various applications, particularly in trading. It can supply a historical overview of a certain type of cryptocurrency, analyze market data, forecast price movements, create strategies, and find trading opportunities. By leveraging ChatGPT, traders can make better-informed decisions and capitalize on emerging possibilities. Here is a very simple example of how ChatGPT can aid in creating a strategy for identifying Chainlink bottoms using the relative strength index (RSI), support and resistance levels, and moving averages:

While ChatGPT can elucidate various aspects of the crypto arena, it is imperative to recognize its potential limitations, particularly the reliability of its information sources. The internet is fraught with misinformation, and since the advent of GPT-4, which offers web-browsing capabilities, such misinformation could inadvertently affect AI tools like ChatGPT.
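One practical way to fact-check an indicator-based suggestion is to compute the indicator yourself rather than trusting the model's numbers. Below is a plain-Python RSI using Wilder's smoothing with the conventional 14-period window and 30 oversold threshold — an illustration only, not investment advice:

```python
def rsi(closes, period=14):
    """Wilder's RSI from a list of closing prices (needs at least period+1 prices)."""
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    # Wilder smoothing over the remaining observations
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

def looks_like_bottom(closes, threshold=30.0):
    """Classic oversold heuristic: RSI below the threshold."""
    return rsi(closes) < threshold
```

Running this over real price history lets you verify whether an "oversold" claim from any source, ChatGPT included, actually holds up against the data.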
Within the volatile crypto market, such unreliable information can lead to imprudent investments. It is advisable to fact-check the data ChatGPT provides to mitigate the risk of acting on information from dubious sources.

The Importance of AI in Safeguarding the Crypto Market

The adoption of blockchain technology has brought benefits such as increased openness, data consistency, and security. By integrating AI with blockchain, a more secure and intelligent system can be established. Blockchain ensures the integrity of shared information and models used by AI, while AI enhances fraud detection capabilities. The combination of AI and blockchain creates a more resilient system that is resistant to attacks and fraud.

ChatGPT in Crypto Fraud Detection

ChatGPT, with its NLP capabilities, can contribute to fraud detection in the crypto market in several ways:

Identifying suspicious transactions and activities: By analyzing emails for suspicious language patterns and detecting anomalies, ChatGPT can help identify potential fraud. It can compare email texts to earlier communications from the same individual, ensuring consistency and detecting deviations.

Analyzing patterns and anomalies in crypto trading data: ChatGPT can analyze market data and find significant patterns and trends that can aid traders in making informed decisions.

Monitoring social media and external sources: ChatGPT can help compliance teams monitor chat and social networking platforms for suspicious activities, such as market manipulation and insider trading.

Utilizing advanced machine learning algorithms for risk assessment: Machine learning algorithms can predict the likelihood of default on loans or identify risky transactions.
This information helps lenders make more informed decisions and manage risks effectively.

ChatGPT in Crypto Compliance and AML

Identifying and verifying the identity of crypto traders and investors: ChatGPT excels in identifying and verifying the identity of traders and investors, ensuring the authenticity of individuals involved in crypto transactions.

Monitoring for money laundering and financial crimes: By leveraging AI capabilities, compliance teams can monitor transactions and identify suspicious patterns indicative of money laundering or other financial crimes.

Keeping up with regulatory changes and compliance requirements: AI chatbots like ChatGPT can adapt to regulatory changes and comply with requirements set by authorities to ensure seamless operations within legal frameworks.

Developing and implementing effective KYC and AML procedures: NLP and supervised machine learning techniques play a vital role in streamlining Know Your Customer (KYC) procedures. These technologies facilitate efficient identity verification and analysis of unstructured content.

ChatGPT in Crypto Security

Protecting crypto assets and digital wallets: AI tools like ChatGPT enhance security measures in crypto exchanges and platforms, safeguarding digital assets and wallets from potential threats.

Enhancing security in crypto exchanges and platforms: ChatGPT helps verify the identities of investors, bolstering the overall security mechanism of crypto exchanges and platforms.

Identifying and preventing phishing and hacking attempts: AI algorithms can block unauthorized smart contracts and reduce the risk of phishing and hacking attacks, thereby enhancing the security of the crypto industry.

Developing and implementing advanced security protocols: AI algorithms and machine learning techniques help organizations identify vulnerabilities in their security architecture and improve overall system security.

Future Developments and Implications of ChatGPT in Crypto

Advancements in NLP and AI are
expected to have a significant impact on fraud detection and prevention. As society moves toward a cashless economy, the role of AI in identifying and preventing digital fraud becomes increasingly critical. ChatGPT's ability to home in on popular themes enables traders to stay updated on crypto news, retrieve relevant data, and generate trading strategies based on historical information. However, it is crucial to consider the ethical implications and potential risks associated with using AI for fraud detection and in financial systems. Responsible and informed use of AI technologies can contribute to building trust and credibility in the crypto market. ChatGPT, with its advanced NLP capabilities, offers exciting possibilities for the crypto market. Its potential to enhance fraud detection, bolster security measures, and build trust and credibility is promising. However, it is essential to approach AI adoption in the crypto market cautiously, taking into account ethical considerations and potential implications. As technology continues to evolve, the responsible and informed use of AI can pave the way for a safer and more efficient crypto ecosystem.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities. No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!

Clint Bodungen
03 Jan 2024
14 min read

Interactive Email Phishing Training with ChatGPT

Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!
This article is an excerpt from the book, ChatGPT for Cybersecurity Cookbook, by Clint Bodungen. Master ChatGPT and the OpenAI API, and harness the power of cutting-edge generative AI and large language models to revolutionize the way you perform penetration testing, threat detection, and risk assessment.
Introduction
With the rise of cyber threats, organizations of all sizes are increasingly aware of the importance of training their staff on email phishing, a common and potentially dangerous tactic employed by cybercriminals. In this recipe, we'll be using ChatGPT to create a tool for interactive email phishing training. This recipe guides you through the process of crafting a specialized prompt to turn ChatGPT into a simulation tool for phishing attack awareness. With this approach, you can use ChatGPT to train users to identify potential phishing emails, thereby increasing their awareness and helping to protect your organization from potential security threats.
What makes this truly powerful is its interactive nature. ChatGPT will present the user with a series of email scenarios. The user will then decide whether the email is a phishing attempt or a legitimate email, and can even ask for more details such as the URL to a link in the email or header information, for example. ChatGPT will provide feedback, ensuring a continuous, engaging, and efficient learning experience. Additionally, we will cover how to use Python in conjunction with these prompts to create exportable email simulation scenarios. This feature can be beneficial in situations where you might want to use the generated scenarios outside of ChatGPT, such as in a live course or in a Learning Management System (LMS).
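Before working through the setup that follows, a quick programmatic self-check can confirm the prerequisites are in place. This is a minimal sketch, not part of the book's recipe; check_environment is a hypothetical helper name:

```python
import importlib.util
import os

def check_environment() -> list:
    """Return a list of setup problems; an empty list means you are ready."""
    problems = []
    # The recipe's scripts read the API key from an environment variable.
    if not os.getenv("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY environment variable is not set")
    # openai and tqdm must be installed separately; os is built in.
    for package in ("openai", "tqdm"):
        if importlib.util.find_spec(package) is None:
            problems.append(f"missing package: {package}")
    return problems

if __name__ == "__main__":
    issues = check_environment()
    print("Environment ready!" if not issues else "\n".join(issues))
```

Running this before the main scripts surfaces missing packages or an unset API key up front, rather than as a mid-run exception.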
Getting ready
Before diving into this recipe, ensure you have your OpenAI account set up and your API key on hand. If not, you should refer back to Chapter 1 for the necessary setup details. You will also need Python version 3.10.x or later. Additionally, confirm you have the following Python libraries installed:
1. openai: This library enables you to interact with the OpenAI API. Install it using the command pip install openai.
2. os: This is a built-in Python library, which allows you to interact with the operating system, especially for accessing environment variables.
3. tqdm: This library is utilized for showing progress bars during the policy generation process. Install it with pip install tqdm.
How to do it…
In this section, we will walk you through the process of creating an interactive email phishing training simulation using ChatGPT. The instructions are step-by-step, starting from logging into your OpenAI account and ending with generating phishing training simulations.
1. Access the ChatGPT interface. Log into your OpenAI account and go to the ChatGPT interface at https://chat.openai.com.
2. Initialize the simulation by entering the specialized prompt. The following prompt is carefully designed to instruct ChatGPT to act as a phishing training simulator. Enter the prompt into the text box and press Enter.
"You are a cybersecurity professional and expert in adversarial social engineering tactics, techniques, and procedures, with 25 years of experience. Create an interactive email phishing training simulation (for employees). Provide no other response other than to ask the question, "Is the following email real or a phishing attempt? (You may ask clarification questions such as URL information, header information, etc.)" followed by simulated email, using markdown language formatting. The email you present can represent a legitimate email or a phishing attempt, which can use one or more various techniques.
Provide no further generation or response until I answer the question. If I answer correctly, just respond with "Correct" and a short description to further explain the answer, and then restart the process from the beginning. If I answer incorrectly, respond with "Incorrect", then the correct answer, then a short description to further explain the answer. Then repeat the process from the beginning. Present me with only 3 simulations in total throughout the process and remember my answer to them all. At least one of the simulations should simulate a real email. After the last question has been answered, and after your response, end the assessment and give me my total score, the areas I did well in and where I need to improve."
Tip
Be sure to change the number of simulations ChatGPT provides, to suit your needs.
Now, ChatGPT will generate interactive email phishing scenarios based on your instructions. Respond to each scenario as if you were the employee undergoing the training. After the third scenario and your final response, ChatGPT will calculate and provide your total score, areas of strength, and areas for improvement.
How it works…
At the heart of this recipe lies the specialized prompt. This prompt is constructed to instruct ChatGPT to act as an interactive phishing training tool, delivering a series of email phishing scenarios. The prompt follows certain design principles which are essential to its effectiveness and interaction with the OpenAI models. Here, we'll dissect those principles:
1. Defining the role: The prompt starts by setting up the role of the AI model – a cybersecurity professional and expert in adversarial social engineering tactics, techniques, and procedures, with 25 years of experience. By defining the AI's persona, we direct the model to generate responses using the knowledge and expertise expected from such a role.
2.
Detailed instructions and simulation: The instructions given in the prompt are meticulously detailed, and it is this precision that enables ChatGPT to create effective and realistic phishing simulations. The prompt asks the AI model to generate a phishing email scenario, followed by the question, "Is the following email real or a phishing attempt?". Notably, the AI model is given the liberty to provide additional clarifying questions, such as asking about URL information, header information, etc., giving it the freedom to generate more nuanced and complex scenarios. By asking the model to generate these emails using markdown language formatting, we ensure that the simulated emails have the structure and appearance of genuine emails, enhancing the realism of the simulation. The AI is also instructed to present emails that can represent either legitimate correspondence or a phishing attempt, ensuring a diverse range of scenarios for the user to evaluate. How can ChatGPT convincingly simulate phishing emails? Well, ChatGPT's strength comes from the wide variety of text it has been trained on, including (but not limited to) countless examples of email correspondences and probably some instances of phishing attempts or discussions around them. From this extensive training, the model has developed a robust understanding of the format, tone, and common phrases used in both legitimate and phishing emails. So, when prompted to simulate a phishing email, it can draw on this knowledge to generate a believable email that mirrors the features of a real-world phishing attempt. As the model doesn't generate responses until it receives an answer to its question, it guarantees an interactive user experience. Based on the user's response, the model provides relevant feedback ("Correct" or "Incorrect"), the correct answer if the user was wrong, and a brief explanation. 
This detailed, immediate feedback aids the learning process and helps to embed the knowledge gained from each simulated scenario. It's worth noting that, although the model has been trained to generate human-like text, it doesn't understand the content in the same way humans do. It doesn't have beliefs, opinions, or access to real-time, world-specific information or personal data unless explicitly provided in the conversation. Its responses are merely predictions based on its training data. The carefully designed prompt and structure are what guide the model to generate useful, contextually appropriate content for this particular task.
3. Feedback mechanism: The prompt instructs the AI to provide feedback based on the user's answer, further explaining the answer. This creates an iterative feedback loop that enhances the learning experience.
4. Keeping track of progress: The prompt instructs the AI to present three simulations in total and to remember the user's answer to all of them. This ensures continuity in the training and enables tracking of the user's progress.
5. Scoring and areas of improvement: After the final simulation and response, the prompt instructs the AI to end the assessment and provide a total score along with areas of strength and areas for improvement. This helps the user understand their proficiency and the areas they need to focus on for improvement.
ChatGPT's models are trained on a broad range of internet text. However, it's essential to note that it does not know specifics about which documents were part of its training set or have access to any private, confidential, or proprietary information. It generates responses to prompts by recognizing patterns and producing text that statistically aligns with the patterns it observed in its training data.
By structuring our prompt in a way that clearly defines the interactive assessment context and expected behavior, we're able to leverage this pattern recognition to create a highly specialized interactive tool. The ability of the OpenAI models to handle such a complex and interactive use case demonstrates their powerful capability and flexibility.
There's more…
If you're using a Learning Management System (LMS) or conducting a live class, you might prefer to have a list of scenarios and details rather than an interactive method like ChatGPT. In these settings, it's often more practical to provide learners with specific scenarios to ponder and discuss in a group setting. The list can also be used for assessments or training materials, offering a static reference point that learners can revisit as needed, or as content for a phishing simulation system. By modifying the script from the previous recipe, you can instruct the ChatGPT model to produce a set of phishing email simulations complete with all necessary details. The resulting text can be saved into a file for easy distribution and usage in your training environment. Since this script is so similar to the one from the previous recipe, we'll just cover the modifications instead of stepping through the entire script again. Let's walk through the necessary modifications:
1. Rename and modify the function: The function generate_question is renamed to generate_email_simulations, and its argument list and body are updated to reflect its new purpose. It will now generate the phishing email simulations instead of cybersecurity awareness questions. This is done by updating the messages that are passed to the OpenAI API within this function.
def generate_email_simulations() -> str:
    # Define the conversation messages
    messages = [
        {"role": "system", "content": 'You are a cybersecurity professional and expert in adversarial social engineering tactics, techniques, and procedures, with 25 years of experience.'},
        {"role": "user", "content": 'Create a list of fictitious emails for an interactive email phishing training. The emails can represent a legitimate email or a phishing attempt, using one or more various techniques. After each email, provide the answer, contextual descriptions, and details for any other relevant information such as the URL for any links in the email, header information. Generate all necessary information in the email and supporting details. Present 3 simulations in total. At least one of the simulations should simulate a real email.'},
    ]
    ...
Note
You can adjust the number of scenarios here to suit your needs. In this example, we've requested 3 scenarios.
2. Remove unnecessary code: The script no longer reads content categories from an input file, as it's not required in your use case.
3. Update variable and function names: All variable and function names referring to "questions" or "assessment" have been renamed to refer to "email simulations" instead, to make the script more understandable in the context of its new purpose.
4. Call the appropriate function: The generate_email_simulations function is called instead of the generate_question function. This function initiates the process of generating the email simulations.
# Generate the email simulations
email_simulations = generate_email_simulations()
Tip
Like the previous method, more scenarios will require a model that supports a larger context window. However, the gpt-4 model seems to provide better results in terms of accuracy, depth, and consistency with its generations for this recipe.
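Before dropping this into the full script, the message structure can be exercised on its own. The sketch below uses a hypothetical helper, build_training_messages, which is not in the book's code; it simply assembles the system/user pair described above with a configurable simulation count:

```python
ROLE = (
    "You are a cybersecurity professional and expert in adversarial social "
    "engineering tactics, techniques, and procedures, with 25 years of experience."
)

def build_training_messages(num_simulations: int = 3) -> list:
    """Assemble the system/user messages for the email simulation request."""
    task = (
        "Create a list of fictitious emails for an interactive email phishing "
        f"training. Present {num_simulations} simulations in total. At least "
        "one of the simulations should simulate a real email."
    )
    return [
        {"role": "system", "content": ROLE},
        {"role": "user", "content": task},
    ]

# Changing the count only touches the user message, not the system role.
messages = build_training_messages(5)
print(messages[1]["content"])
```

Parametrizing the count this way avoids hand-editing the prompt string each time you change the number of scenarios.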
Here's how the complete script should look:

import openai
import os
import threading
import time
from datetime import datetime

# Set up the OpenAI API
openai.api_key = os.getenv("OPENAI_API_KEY")

current_datetime = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
assessment_name = f"Email_Simulations_{current_datetime}.txt"

def generate_email_simulations() -> str:
    # Define the conversation messages
    messages = [
        {"role": "system", "content": 'You are a cybersecurity professional and expert in adversarial social engineering tactics, techniques, and procedures, with 25 years of experience.'},
        {"role": "user", "content": 'Create a list of fictitious emails for an interactive email phishing training. The emails can represent a legitimate email or a phishing attempt, using one or more various techniques. After each email, provide the answer, contextual descriptions, and details for any other relevant information such as the URL for any links in the email, header information. Generate all necessary information in the email and supporting details. Present 3 simulations in total. At least one of the simulations should simulate a real email.'},
    ]

    # Call the OpenAI API
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=2048,
        n=1,
        stop=None,
        temperature=0.7,
    )

    # Return the generated text
    return response['choices'][0]['message']['content'].strip()

# Function to display elapsed time while waiting for the API call
def display_elapsed_time():
    start_time = time.time()
    while not api_call_completed:
        elapsed_time = time.time() - start_time
        print(f"\rElapsed time: {elapsed_time:.2f} seconds", end="")
        time.sleep(1)

api_call_completed = False
elapsed_time_thread = threading.Thread(target=display_elapsed_time)
elapsed_time_thread.start()

# Generate the report using the OpenAI API
try:
    # Generate the email simulations
    email_simulations = generate_email_simulations()
except Exception as e:
    print(f"\nAn error occurred during the API call: {e}")
    api_call_completed = True  # Stop the timer thread so the script can exit
    exit()

api_call_completed = True
elapsed_time_thread.join()

# Save the email simulations into a text file
try:
    with open(assessment_name, 'w') as file:
        file.write(email_simulations)
    print("\nEmail simulations generated successfully!")
except Exception as e:
    print(f"\nAn error occurred during the email simulations generation: {e}")

By running this modified script, the ChatGPT model is directed to generate a series of interactive email phishing training scenarios. The script then collects the generated scenarios, checks them for errors, and writes them to a text file. This gives you a ready-made training resource that you can distribute to your learners or incorporate into your LMS or live training sessions.
Conclusion
In conclusion, leveraging ChatGPT for interactive email phishing training empowers users with immersive, realistic simulations, bolstering cybersecurity awareness and defense.
This innovative approach fosters a proactive stance against threats, ensuring organizations stay ahead in the ever-evolving landscape of cyber risks. With adaptable learning and dynamic feedback, ChatGPT transforms education, creating a robust line of defense against potential security breaches.
Author Bio
Clint Bodungen is a cybersecurity professional with 25+ years of experience and the author of Hacking Exposed: Industrial Control Systems. He began his career in the United States Air Force and has since served many of the world's largest energy companies and organizations, working for notable cybersecurity companies such as Symantec, Kaspersky Lab, and Booz Allen Hamilton. He has published multiple articles, technical papers, and training courses on cybersecurity and aims to revolutionize cybersecurity education using computer gaming ("gamification") and AI technology. His flagship product, ThreatGEN® Red vs. Blue, is the world's first online multiplayer cybersecurity simulation game, designed to teach real-world cybersecurity.

James Bryant, Alok Mukherjee
05 Dec 2023
11 min read

ChatGPT for Financial Analysis: Palo Alto Networks

This article is an excerpt from the book, The Future of Finance with ChatGPT and Power BI, by James Bryant, Alok Mukherjee. Enhance decision-making, transform your market approach, and find investment opportunities by exploring AI, finance, and data visualization with ChatGPT's analytics and Power BI's visuals.
Introduction
In this section, we will explore an interesting example of how ChatGPT can be used to analyze and summarize earnings reports, enabling you to identify key insights and trends quickly. With the vast amount of information available in earnings reports, it can be challenging to sift through data and identify the most critical elements. Let's see how ChatGPT can help.
Here's the scenario – Palo Alto Networks has just released its quarterly earnings report. You want to understand the company's financial performance and identify any trends or potential issues that may impact the stock price or investment potential:
Step 1 – Extract key data points:
To get started, provide ChatGPT with the relevant earnings report data, such as revenue, net income, EPS, and any other important metrics. Be sure to include both current and historical data for comparison purposes. You can either input this data manually or automate the process using an API or web scraper. Let's explore the automated process to add Palo Alto Networks' financial information from September 2021 to March 2023 to ChatGPT.
Step 1.1 – Automating data collection with Python and API/web scraping:
1. Choose a financial API or web scraping library in Python:
If using an API, explore options such as Alpha Vantage (alphavantage.co):
Obtain an API key from the Alpha Vantage website (free and paid versions).
Choose a method – Python requests.
Make a request.
If web scraping, use libraries such as Requests and Beautiful Soup.
For web scraping, identify the URLs of the company's financial statements or earnings reports from websites such as Yahoo Finance (finance.yahoo.com), Nasdaq (nasdaq.com), or the company's investor relations page.
2. Set up your Python script for data collection:
For APIs:
a. Import the necessary libraries (e.g., requests or pandas) – for example, import requests and import pandas as pd.
b. Define the API key, endpoint URL, and required parameters.
c. Make a request to the API to fetch data using the requests library.
d. Parse the response data and convert it into a pandas DataFrame.
For web scraping:
a. Import the necessary libraries (e.g., requests, BeautifulSoup, or pandas) – for example, import requests, from bs4 import BeautifulSoup, and import pandas as pd.
b. Define the URL(s) containing the financial data.
c. Use the requests library to fetch the HTML content of the web page.
d. Parse the HTML content using BeautifulSoup to extract the required financial data.
e. Convert the extracted data into a pandas DataFrame.
3. Collect historical data from September 2021 to March 2023 for the relevant financial metrics:
Adjust the parameters in your API request or web scraping script to target the specified date range.
4.
Save the collected data in a structured format, such as a CSV file or a pandas DataFrame, for further processing and analysis:
Use pandas' DataFrame.to_csv() method to save the collected data as a CSV file.
Alternatively, keep the data in a pandas DataFrame for further analysis within the Python script.
With these additions, you should have a better understanding of where to obtain financial data and the necessary Python libraries to import for your data collection scripts.
We will now provide a step-by-step guide using Python code for Palo Alto Networks' financial data. Extract Palo Alto Networks' quarterly financial data (revenue, net income, and EPS) for the time period September 2021–March 2023, and save it in a CSV file as text input, using the Alpha Vantage API key (finance website):
1. Install the necessary Python package and pandas library in Command Prompt:
pip install requests
pip install pandas
2. Create a new Python script file in Notepad, Notepad++, PyCharm, or Visual Studio Code. It is important that you add your Alpha Vantage API key in the following api_key line.
Copy and paste the following code into your Python script file, and name it PANW.py:

import requests
import pandas as pd

api_key = "YOUR_API_KEY"
symbol = "PANW"
url = f"https://www.alphavantage.co/query?function=EARNINGS&symbol={symbol}&apikey={api_key}"

try:
    response = requests.get(url)
    response.raise_for_status()  # Raise HTTPError for bad responses
    data = response.json()
    if 'quarterlyEarnings' in data:
        quarterly_data = data['quarterlyEarnings']
        df = pd.DataFrame(quarterly_data)
        df_filtered = df[(df['reportedDate'] >= '2021-09-01') & (df['reportedDate'] <= '2023-03-31')]
        df_filtered.to_csv("palo_alto_financial_data.csv", index=False)
        input_text = "Analyze the earnings data of Palo Alto Networks from September 2021 to March 2023.\n\n"
        for idx, row in df_filtered.iterrows():
            quarter = idx + 1
            revenue = row.get('revenue', 'N/A')
            net_income = row.get('netIncome', 'N/A')
            eps = row.get('earningsPerShare', 'N/A')
            input_text += f"Quarter {quarter}:\n"
            input_text += f"Revenue: ${revenue}\n"
            input_text += f"Net Income: ${net_income}\n"
            input_text += f"Earnings Per Share: ${eps}\n\n"
        with open("palo_alto_financial_summary.txt", "w") as f:
            f.write(input_text)
    else:
        print("Data not available.")
except requests.RequestException as e:
    print(f"An error occurred: {e}")

3. Run the Python script file:
python PANW.py
4. A separate text file, palo_alto_financial_summary.txt, and a CSV file, palo_alto_financial_data.csv, will be created once the Python script has been executed:
When the Python script, PANW.py, is executed, it performs several tasks to fetch and analyze the earnings data of Palo Alto Networks (the symbol PANW). First, it imports two essential libraries – requests to make API calls and pandas for data manipulation. The script starts by defining a few key variables – the API key to access financial data, the stock symbol of the company, and the URL to the Alpha Vantage API where the data can be retrieved.
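Because each live call counts against your Alpha Vantage quota, the date-range filter at the heart of this script can be tried first on mock rows. The records below are invented for illustration and only mirror the shape of the quarterlyEarnings entries:

```python
import pandas as pd

# Invented quarterly rows standing in for the Alpha Vantage response.
mock_quarters = [
    {"reportedDate": "2021-06-15", "reportedEPS": "1.38"},
    {"reportedDate": "2021-11-18", "reportedEPS": "1.64"},
    {"reportedDate": "2023-02-21", "reportedEPS": "1.05"},
    {"reportedDate": "2023-05-23", "reportedEPS": "1.10"},
]
df = pd.DataFrame(mock_quarters)

# Same comparison the script uses: ISO-formatted dates sort lexicographically,
# so plain string comparisons select the September 2021 - March 2023 window.
df_filtered = df[(df["reportedDate"] >= "2021-09-01") &
                 (df["reportedDate"] <= "2023-03-31")]

print(df_filtered["reportedDate"].tolist())  # ['2021-11-18', '2023-02-21']
```

This also illustrates why the script can get away with string comparisons instead of converting reportedDate to datetime objects.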
Then, a try block is initiated to safely execute the following operations. The script uses the requests.get() method to query the Alpha Vantage API. If the request is successful, the response is parsed as JSON and stored in a variable named data. It then checks whether data contains a key called quarterlyEarnings.
If this key exists, the script proceeds to convert the quarterly earnings data into a pandas DataFrame. It filters this DataFrame to include only the entries between September 2021 and March 2023. The filtered data is then saved as a CSV file named palo_alto_financial_data.csv:
The CSV file contains raw financial data in tabular form
The CSV file can be imported into Excel, Google Sheets, or other specialized data analysis tools
The script also constructs a text-based summary of the filtered earnings data, including revenue, net income, and EPS for each quarter within the specified date range. This summary is saved as a text file named palo_alto_financial_summary.txt:
The TXT file provides a human-readable summary of the financial data for Palo Alto Networks for the specified date range
TXT files can be used for quick overviews and presentations
If any errors occur during this process, such as a failed API request, the script will catch these exceptions and print an error message, thanks to the except block. This ensures that the script fails gracefully, providing useful feedback instead of crashing.
You can upload the CSV file (palo_alto_financial_data.csv) to ChatGPT directly if you are a ChatGPT Plus user by following these steps:
Uploading a CSV file directly into ChatGPT is supported through the Advanced Data Analysis option for ChatGPT Plus users. You can access the OpenAI website at https://openai.com/, and then log in using your login credentials. Once logged in, access your Settings and Beta options by clicking on the three dots near your email address in the bottom-left corner of the screen.
Go to Beta features and activate the Advanced data analysis function by moving the slider to the right (the option will turn green). Once this feature is activated, go to GPT-4 at the top center of the screen and then select Advanced Data Analysis from the drop-down list. You can click on the plus sign in the dialog box to upload the CSV file to ChatGPT:
•  CSV file size limitations: 500 MB
•  CSV file retention: Files are retained while a conversation is active and for three hours after the conversation is paused
If you are not a ChatGPT Plus user, follow the following instructions using the OpenAI API to upload the CSV file (palo_alto_financial_data.csv) into ChatGPT, and analyze the data using the GPT-3.5-turbo model:
1. Create a new Python script file in Notepad, Notepad++, PyCharm, or Visual Studio Code. It is important that you add your OpenAI API key to the following api_key line. Copy and paste the following code into your Python script file and name it OPENAIAPI.py:

import openai
import pandas as pd

df = pd.read_csv("palo_alto_financial_data.csv")
csv_string = df.to_string(index=False)

api_key = "your_openai_api_key_here"
openai.api_key = api_key

input_text = f"Here is the financial data for Palo Alto Networks:\n\n{csv_string}\n\nPlease analyze the data and provide insights."

# gpt-3.5-turbo is a chat model, so it is called through the chat
# completions endpoint rather than the legacy Completion endpoint
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": input_text}],
    max_tokens=200  # Limiting the length of the generated text
)

generated_text = response['choices'][0]['message']['content'].strip()
print("GPT-3.5-turbo PANW Analysis:", generated_text)

2. Run the Python script file:
python OPENAIAPI.py
This Python code snippet is responsible for interacting with the OpenAI API to send the formatted text input (the financial data prompt) to ChatGPT and receive the generated response.
Here's a breakdown of each part:
The Python code snippet starts by importing two essential Python libraries – openai for interacting with the OpenAI API, and pandas for data manipulation. The script reads financial data from a CSV file named palo_alto_financial_data.csv using pandas, converting this data into a formatted string. It then sets up the OpenAI API by initializing it with a user-provided API key.
Following this, the script prepares a prompt for GPT-3.5-turbo, consisting of the loaded financial data and a request for analysis. This prompt is sent to the GPT-3.5-turbo engine via the OpenAI API, which returns a text-based analysis, limited to 200 tokens. The generated analysis is then extracted from the API's response and printed to the console with the label "GPT-3.5-turbo PANW Analysis." The script essentially automates the process of sending financial data to the GPT-3.5-turbo engine for insightful analysis, making it easy to obtain quick, AI-generated insights on Palo Alto Networks' financial performance.
Conclusion
In conclusion, harnessing ChatGPT's capabilities, we've navigated Palo Alto Networks' earnings landscape. From automated data extraction to insightful analysis, this journey unveiled crucial financial trends. Whether utilizing APIs or scraping web data, the process demystified complexities, offering a streamlined approach. By generating comprehensive summaries and interacting with ChatGPT for deeper insights, the pathway to understanding financial data has been simplified. Embracing AI-powered analysis enables swift comprehension of earnings reports, empowering informed decisions in the realm of financial scrutiny and investment strategies.
Author Bio
James Bryant, a finance and technology expert, excels at identifying untapped opportunities and leveraging cutting-edge tools to optimize financial processes.
With expertise in finance automation, risk management, investments, trading, and banking, he's known for staying ahead of trends and driving innovation in the financial industry. James has built corporate treasuries like Salesforce and transformed companies like Stanford Health Care through digital innovation. He is passionate about sharing his knowledge and empowering others to excel in finance. Outside of work, James enjoys skiing with his family in Lake Tahoe, running half marathons, and exploring new destinations and culinary experiences with his wife and daughter.Aloke Mukherjee is a seasoned technologist with over a decade of experience in business architecture, digital transformation, and solutions architecture. He excels at applying data-driven solutions to real-world problems and has proficiency in data analytics and planning. Aloke worked at EMC Corp and Genentech and currently spearheads the digital transformation of Finance Business Intelligence at Stanford Health Care. In addition to his work, Aloke is a Certified Personal Trainer and is passionate about helping his clients stay fit. Aloke also has a passion for wine and exploring new vineyards.

Merlyn Shelley
13 Oct 2023
12 min read

AI_Distilled #21: MLAgentBench as AI Research Agents, OpenAI’s Python SDK and AI Chip, AMD Acquires Nod.ai, IBM Enhances PyTorch for AI Inference, Microsoft to Tackle GPU Shortage

👋 Hello,
"Scientific experimentation involves an iterative process of creating hypotheses, designing experiments, running experiments, and analyzing the results. Can we build AI research agents to perform these long-horizon tasks? To take a step towards building and evaluating research agents on such open-ended decision-making tasks -- we propose MLAgentBench, a suite of ML tasks for benchmarking AI research agents."
- from the paper Benchmarking Large Language Models as AI Research Agents (arXivLabs, Oct 2023), proposed by Qian Huang, Jian Vora, Percy Liang, Jure Leskovec.
Stanford University researchers are addressing the challenge of evaluating AI research agents with free-form decision-making abilities through MLAgentBench, a pioneering benchmark. This framework provides research tasks with task descriptions and required files, allowing AI agents to mimic human researchers' actions like reading, writing, and running code. The evaluation assesses proficiency, reasoning, research process, and efficiency.
Welcome to AI_Distilled #21, your weekly source for the latest breakthroughs in AI, ML, GPT, and LLM. In this edition, we'll talk about Microsoft and Google introducing new AI initiatives for healthcare, OpenAI unveiling the beta version of its Python SDK for enhanced API access, IBM's enhancement of PyTorch for AI inference targeting enterprise deployment, AMD enhancing its AI capabilities with the acquisition of Nod.ai, and a quick look at OpenAI's ambitious new ventures in AI chipmaking to tackle the global chip shortage. We know how much you love our curated collection of AI tutorials and secret knowledge.
We’ve packed some great knowledge resources in this issue covering recent advances in enhancing content safety with Azure ML, understanding autonomous agents for problem solving with LLMs, and enhancing code quality and security with Generative AI, Amazon Bedrock, and CodeGuru.

📥 Feedback on the Weekly Edition
What do you think of this issue and our newsletter? Please consider taking the short survey below to share your thoughts and you will get a free PDF of “The Applied Artificial Intelligence Workshop” eBook upon completion. Complete the Survey. Get a Packt eBook for Free!

Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!

Cheers,
Merlyn Shelley
Editor-in-Chief, Packt

⚡ TechWave: AI/GPT News & Analysis

Microsoft and Google Introduce New Gen AI Initiatives for Healthcare: Microsoft and Alphabet's Google have unveiled separate AI initiatives to assist healthcare organizations in improving data access and information management. Google's project, powered by Google Cloud, aims to simplify the retrieval of patient data, including test results and prescriptions, in one central location. It also intends to help healthcare professionals with administrative tasks that often lead to work overload and burnout. Meanwhile, Microsoft's initiative is focused on enabling healthcare entities to efficiently aggregate data from various doctors and hospitals, eliminating the time-consuming search for information.

OpenAI Mulls Chip Independence Due to Rising Costs: OpenAI, known for its ChatGPT AI model, is considering developing its own AI chips due to the growing costs of using Nvidia's hardware. Each ChatGPT query costs OpenAI around 4 cents, and the company reportedly spends $700,000 daily to run ChatGPT. Nvidia accounts for over 70% of AI chip sales but is becoming costly for OpenAI. The organization has been in discussions about making its own chips but has not made a final decision.
Microsoft is also exploring in-house chip development, potentially competing with Nvidia's H100 GPU. OpenAI may remain dependent on Nvidia for the time being.

Microsoft May Unveil AI Chip at Ignite 2023 to Tackle GPU Shortage: Microsoft is considering debuting its own AI chip at the upcoming Ignite 2023 conference due to the high demand for GPUs, with NVIDIA struggling to meet this demand. The chip would be utilized in Microsoft's data center servers and to enhance AI capabilities within its productivity apps. This move reflects Microsoft's commitment to advancing AI technology following a substantial investment in OpenAI. While Microsoft plans to continue purchasing NVIDIA GPUs, the development of its own AI chip could increase profitability and competitiveness with tech giants like Amazon and Google, who already use their custom AI chips.

OpenAI Unveils Beta Version of Python SDK for Enhanced API Access: OpenAI has released a beta version of its Python SDK, aiming to improve access to the OpenAI API for Python developers. This Python library simplifies interactions with the OpenAI API for Python-based applications, providing an opportunity for early testing and feedback ahead of the official version 1.0 launch. The SDK streamlines integration by offering pre-defined classes for API resources and ensuring compatibility across different API versions. OpenAI encourages developers to explore the beta version, share feedback, and shape the final release. The library supports various tasks, including chat completions, text model completions, embeddings, fine-tuning, moderation, image generation, and audio functions.

IBM Enhances PyTorch for AI Inference, Targeting Enterprise Deployment: IBM is expanding the capabilities of the PyTorch machine learning framework beyond model training to AI inference. The goal is to provide a robust, open-source alternative for inference that can operate on multiple vendor technologies and both GPUs and CPUs.
IBM's efforts involve combining three techniques within PyTorch: graph fusion, kernel optimizations, and parallel tensors to speed up inference. Using these optimizations, they achieved impressive inference speeds of 29 milliseconds per token for a large language model with 70 billion parameters. While these efforts are not yet ready for production, IBM aims to contribute these improvements to the PyTorch project for future deployment, making PyTorch more enterprise-ready.

AMD Enhances AI Capabilities with Acquisition of Nod.ai: AMD has announced its intention to acquire Nod.ai, a startup focused on optimizing AI software for high-performance hardware. This acquisition underlines AMD's commitment to the rapidly expanding AI chip market, which is projected to reach $383.7 billion by 2032. Nod.ai's software, including the SHARK Machine Learning Distribution, will accelerate the deployment of AI models on platforms utilizing AMD's architecture. By integrating Nod.ai's technology, AMD aims to offer open software solutions to facilitate the deployment of highly performant AI models, thereby enhancing its presence in the AI industry.

🔮 Expert Insights from Packt Community

Machine Learning Engineering with MLflow - By Natu Lauchande

Developing your first model with MLflow

From the point of view of simplicity, in this section, we will use the built-in sample datasets in sklearn, the ML library that we will use initially to explore MLflow features. For this section, we will choose the famous Iris dataset to train a multi-class classifier using MLflow. The Iris dataset (one of sklearn's built-in datasets available from https://scikit-learn.org/stable/datasets/toy_dataset.html) contains the following elements as features: sepal length, sepal width, petal length, and petal width.
The target variable is the class of the iris: Iris Setosa, Iris Versicolor, or Iris Virginica.

Load the sample dataset:

from sklearn import datasets
from sklearn.model_selection import train_test_split

dataset = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.4)

Next, let's train your model. Training a simple machine learning model with a framework such as scikit-learn involves instantiating an estimator such as LogisticRegression and calling the fit command to execute training over the Iris dataset built in scikit-learn:

from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
clf.fit(X_train, y_train)

The preceding lines of code are just a small portion of the ML Engineering process. As will be demonstrated, a non-trivial amount of code needs to be created in order to productionize and make sure that the preceding training code is usable and reliable. One of the main objectives of MLflow is to aid in the process of setting up ML systems and projects. In the following sections, we will demonstrate how MLflow can be used to make your solutions robust and reliable.

Then, we will add MLflow. With a few more lines of code, you should be able to start your first MLflow interaction. In the following code listing, we start by importing the mlflow module, followed by the LogisticRegression class in scikit-learn. You can use the accompanying Jupyter notebook to run the next section:

import mlflow
from sklearn.linear_model import LogisticRegression

mlflow.sklearn.autolog()
with mlflow.start_run():
    clf = LogisticRegression()
    clf.fit(X_train, y_train)

The mlflow.sklearn.autolog() instruction enables you to automatically log the experiment in the local directory. It captures the metrics produced by the underlying ML library in use. MLflow Tracking is the module responsible for handling metrics and logs. By default, the metadata of an MLflow run is stored in the local filesystem.
The above content is extracted from the book Machine Learning Engineering with MLflow written by Natu Lauchande and published in Aug 2021. To get a glimpse of the book's contents, make sure to read the free chapter provided here, or if you want to unlock the full Packt digital library free for 7 days, try signing up now! To learn more, click on the button below.

Read through the Chapter 1 unlocked here...

🌟 Secret Knowledge: AI/LLM Resources

Boosting Model Inference Speed with Quantization: In the realm of deploying deep learning models, efficiency is key. This post offers a primer on quantization, a technique that significantly enhances the inference speed of hosted language models. Quantization involves reducing the precision of data types used for weights and activations, such as moving from 32-bit floating point to 8-bit integers. While this may slightly affect model accuracy, the benefits are substantial: reduced memory usage, faster inference times, lower energy consumption, and the ability to deploy models on edge devices. The post explains two common approaches for quantization: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT), helping you understand how to implement them effectively.

Unlocking Database Queries with Text2SQL: A Historical Perspective and Current Advancements: In this post, you'll explore the evolution of Text2SQL, a technology that converts natural language queries into SQL for interacting with databases. Beginning with rule-based approaches in the 1960s, it has transitioned to machine learning-based models, and now, LLMs like BERT and GPT have revolutionized it. Discover how LLMs enhance Text2SQL, the challenges it faces, and prominent products like Microsoft LayoutLM, Google TAPAS, Stanford Spider, and GuruSQL. Despite challenges, Text2SQL holds great promise for making database querying more convenient and intelligent in practical applications.
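As a rough plain-Python illustration of the affine quantization idea behind PTQ mentioned in the quantization primer above — mapping 32-bit floats to 8-bit integers via a scale and zero-point — consider the following sketch. The helper names are ours, not from the post; real deployments rely on library implementations:

```python
def quantize(values, num_bits=8):
    """Affine (asymmetric) quantization of floats to unsigned integers."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against constant inputs
    zero_point = round(-lo / scale)
    q = [min(qmax, max(qmin, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(quantized, scale, zero_point):
    """Map integers back to approximate floats; error is at most one step."""
    return [(qi - zero_point) * scale for qi in quantized]

weights = [-1.5, 0.0, 0.25, 2.0]
q, scale, zp = quantize(weights)    # 8-bit integers in [0, 255]
approx = dequantize(q, scale, zp)   # close to the original weights
```

Storing one byte per weight instead of a 32-bit float is where the roughly 4x memory saving comes from; QAT differs mainly in that this rounding is simulated during training so the model can compensate for it.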
Enhancing Content Safety with Azure ML: Learn how to ensure content safety in Azure ML when using LLMs. By setting up Azure AI Content Safety and establishing a connection within Prompt Flow, you'll scrutinize user input before directing it to the LLM. The article guides you through constructing the flow, including directing input to content safety, analyzing results, invoking the LLM, and consolidating the final output. With this approach, you can prevent unwanted responses from the LLM and ensure content safety throughout the interaction.

💡 Masterclass: AI/LLM Tutorials

Understanding Autonomous Agents for Problem Solving with LLMs: In this post, you'll explore the concept of autonomous LLM-based agents, how they interact with their environment, and the key modules that make up these agents, including the Planner, Reasoner, Actioner, Executor, Evaluator, and more. Learn how these agents utilize LLMs' inherent reasoning abilities and external tools to efficiently solve intricate problems while avoiding the limitations of fine-tuning.

Determining the Optimal Chunk Size for a RAG System with LlamaIndex: When working with retrieval-augmented generation (RAG) systems, selecting the right chunk size is a crucial factor affecting efficiency and accuracy. This post introduces LlamaIndex's Response Evaluation module, providing a step-by-step guide on how to find the ideal chunk size for your RAG system. Considering factors like relevance, granularity, and response generation time, the optimal balance is typically found around a chunk size of 1024.

Understanding the Power of Rouge Score in Model Evaluation: Evaluating the effectiveness of fine-tuned language models like the Mistral 7B Instruct Model requires a reliable metric, and the Rouge Score is a valuable tool. This article provides a step-by-step guide on how to use the Rouge Score to compare fine-tuned and base language models effectively.
The Rouge Score assesses the similarity of the words generated by a model to reference words provided by humans, using unigrams, bigrams, and n-grams. By mastering this metric, you'll be able to make informed decisions when choosing between different model versions for specific tasks.

Enhancing Code Quality and Security with Generative AI, Amazon Bedrock, and CodeGuru: In this post, you'll learn how to use Amazon CodeGuru Reviewer, Amazon Bedrock, and Generative AI to enhance the quality and security of your code. Amazon CodeGuru Reviewer provides automated code analysis and recommendations, while Bedrock offers insights and code remediation. The post outlines a detailed solution involving CodeCommit, CodeGuru Reviewer, and Bedrock.

Exploring Generative AI with LangChain and OpenAI: Enhancing Amazon SageMaker Knowledge: In this post, the author illustrates the process of hosting a Machine Learning Model with the Generative AI ecosystem, using LangChain, a Python framework that simplifies Generative AI applications, and OpenAI's LLMs. The goal is to see how well this solution can answer SageMaker-related questions, addressing the challenge of LLMs lacking access to specific and recent data sources.

🚀 HackHub: Trending AI Tools

leptonai/leptonai: Python library for simplifying AI service creation, offering a Pythonic abstraction (Photon) for converting research code into a service, simplified model launching, prebuilt examples, and AI-specific features.

okuvshynov/slowllama: Enables developers to fine-tune Llama2 and CodeLLama models, including 70B/35B, on Apple M1/M2 devices or Nvidia GPUs, emphasizing fine-tuning without quantization.

yaohui-wyh/ctoc: A lightweight tool for analyzing codebases at the token level, which is crucial for understanding and managing the memory and conversation history of LLMs.
eric-ai-lab/minigpt-5: A model for interleaved vision-and-language generation using generative vokens to enable the simultaneous generation of images and textual narratives, particularly in the context of multimodal applications.
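To make the Rouge Score item from the Masterclass section above concrete, here is a minimal plain-Python sketch of ROUGE-N recall — the fraction of reference n-grams that also appear in the generated text. The function names are ours; for real evaluations, use an established implementation such as the rouge-score package:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_recall(candidate, reference, n=1):
    """Clipped n-gram overlap divided by the number of reference n-grams."""
    cand = Counter(ngrams(candidate.split(), n))
    ref = Counter(ngrams(reference.split(), n))
    overlap = sum((cand & ref).values())  # min count per shared n-gram
    return overlap / max(sum(ref.values()), 1)

score = rouge_n_recall("the cat sat", "the cat sat on the mat")  # 3 of 6 unigrams -> 0.5
```

The same function with n=2 gives ROUGE-2; production ROUGE implementations also add precision, F-measure, and stemming, which this sketch deliberately omits.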
Designing Decoder-only Transformer Models like ChatGPT

Saeed Dehqan
28 Aug 2023
9 min read
Introduction

Embark on an enlightening journey into the ChatGPT stack, a remarkable feat in AI-driven language generation. Unveiling its evolution from inception to a proficient AI assistant, we delve into decoder-only transformers, specialized for crafting Shakespearean verses and informative responses.

Throughout this exploration, we dissect the four integral stages that constitute the ChatGPT stack. From exhaustive pretraining to fine-tuned supervised training, we unravel how rewards and reinforcement learning refine response generation to align with context and user intent.

In this blog, we will get acquainted briefly with the ChatGPT stack and then implement a simple decoder-only transformer to train on Shakespeare.

Creating ChatGPT models consists of four main stages:

1. Pretraining
2. Supervised Fine-Tuning
3. Reward modeling
4. Reinforcement learning

The Pretraining stage takes most of the computational time since we train the language model on trillions of tokens. The following table shows the Data Mixtures used for pretraining of LLaMA Meta Models [0]. The datasets come and mix together, according to the sampling proportion, to create the pretraining data. The table shows the datasets along with their corresponding sampling proportion (what portion of the pretraining data is the dataset?), epochs (how many times do we train the model on the corresponding dataset?), and dataset size. It is obvious that the epoch of high-quality datasets such as Wikipedia and Books is high, and as a result, the model grasps high-quality datasets better.

After we have our dataset ready, the next step is tokenization before training. Tokenizing data means mapping all the text data into a large list of integers. In language modeling repositories, we usually have two dictionaries for mapping tokens (a token is a subword; ‘wait’ and ‘ing’, for example, are two tokens) into integers and vice versa. Here is an example:

In [1]: text = "it is obvious that the epoch of high .."
In [2]: tokens = list(set(text.split()))
In [3]: stoi = {s:i for i,s in enumerate(tokens)}
In [4]: itos = {i:s for s,i in stoi.items()}
In [5]: stoi['it']
Out[5]: 22
In [6]: itos[22]
Out[6]: 'it'

Now, we can tokenize texts with the following functions:

In [7]: encode = lambda text: [stoi[x] for x in text.split()]
In [8]: decode = lambda encoded: ' '.join([itos[x] for x in encoded])
In [9]: tokenized = encode(text)
In [10]: tokenized
Out[10]: [22, 19, 18, 5, ...]
In [11]: decode(tokenized)
Out[11]: 'it is obvious that the epoch of high ..'

Suppose the tokenized variable contains all the tokens converted to integers (say 1 billion tokens). We randomly select 3 chunks of the list, each containing 10 tokens, and feed-forward them into a transformer language model to predict the next token. The model’s input has a shape of (3, 10), where 3 is the batch size and 10 is the context length. The model tries to predict the next token for each chunk independently. We select 3 chunks and predict the next token for each chunk to speed up the training process. It is like running the model on 3 chunks of data at once. You can increase the batch size and context length depending on the requirements and resources. Here’s an example:

For convenience, we wrote the token indices along with the corresponding tokens. For each chunk or sequence, the model predicts the whole sequence. Let’s see how this works:

By seeing the first token (it), the model predicts the next token (is). The context token(s) is ‘it’ and the target token for the model is ‘is’. If the model fails to predict the target token, we do backpropagation to adjust model parameters so the model can predict correctly.

During the process, we mask out or hide the future tokens so that the model can’t have access to them, because that would be cheating. We want the model itself to predict the future by only seeing the past tokens. That makes sense, right?
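The random chunk selection described above can be sketched in plain Python. The get_batch helper below is illustrative (a real training loop would return tensors rather than lists):

```python
import random

def get_batch(tokenized, batch_size=3, context_len=10, seed=0):
    """Randomly sample chunks; targets are the same tokens shifted by one."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(batch_size):
        start = rng.randrange(len(tokenized) - context_len)
        chunk = tokenized[start:start + context_len + 1]
        xs.append(chunk[:-1])  # context tokens, length context_len
        ys.append(chunk[1:])   # next-token target for every position
    return xs, ys

xs, ys = get_batch(list(range(1_000)))  # 3 chunks of 10 tokens each
```

Note that each chunk of 11 tokens yields 10 (context, target) pairs at once — exactly the "five predictions for a sequence of five" idea discussed next.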
That’s why we used a gray background for future tokens, which means the model is not able to see them.

After predicting the second token, we have two tokens [it, is] as context to predict what token comes next in the sequence. Here is the third token (obvious).

By using the three previous tokens [it, is, obvious], the model needs to predict the fourth token (that). And as usual, we hide the future tokens (in this case ‘the’).

We give [it, is, obvious, that] to the model as the context in order to predict ‘the’. And finally, we give the whole sequence as context [it, is, obvious, that, the] to predict the next token. We have five predictions for a sequence with a length of five.

After training the model on a lot of randomly selected sequences from the pretraining dataset, the model should be ready to autocomplete your sequence. Give it a sequence of tokens, and it predicts the next token; then, based on what was predicted plus the previous tokens, the model predicts the next tokens one by one. We call it an autoregressive model. That’s it.

But, at this stage, the model is not an AI assistant or a chatbot. It only receives a sequence and tries to complete the sequence. That’s how we trained the model. We don’t train it to answer questions and listen to instructions.
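The autoregressive loop described above — predict one token, append it, and feed everything back in — can be sketched as follows, with a toy stand-in for the trained model:

```python
def generate(context, next_token_fn, num_new_tokens):
    """Autoregressive decoding: each prediction is appended and fed back."""
    tokens = list(context)
    for _ in range(num_new_tokens):
        tokens.append(next_token_fn(tokens))  # the model only sees past tokens
    return tokens

# toy stand-in "model" that predicts the numerically next token id
completed = generate([1, 2, 3], lambda ts: ts[-1] + 1, num_new_tokens=4)
# completed == [1, 2, 3, 4, 5, 6, 7]
```

A real decoder-only transformer replaces the lambda: it maps the token list to a probability distribution over the vocabulary and samples the next token from it.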
We give it context tokens and the model tries to predict the next token based on the context.

You give it this:

“In order to be irrational, you first need to”

And the model continues the sequence:

“In order to be irrational, you first need to abandon logical reasoning and disregard factual evidence.”

Sometimes, you ask it an instruction:

“Write a function to count from 1 to 100.”

And instead of trying to write a function, the model answers with more similar instructions:

“Write a program to sort an array of integers in ascending order.”
“Write a script to calculate the factorial of a given number.”
“Write a method to validate a user's input and ensure it meets the specified criteria.”
“Write a function to check if a string is a palindrome or not.”

That’s where prompt engineering came in. People tried to use some tricks to get the answer to a question out of the model.

Give the model the following prompt:

“London is the capital of England.
Copenhagen is the capital of Denmark.
Oslo is the capital of”

The model answers like this:

“Norway.”

So, we managed to get something helpful out of it with prompt engineering. But we don’t want to provide examples every time. We want to ask it a question and receive an answer. To prepare the model to be an AI assistant, we need further training, named Supervised Fine-Tuning, for instructional purposes.

In the Supervised Fine-Tuning stage, we make the model instructional. To achieve this goal, the model needs training on a high-quality dataset of roughly 15K-100K prompts and responses. Here’s an example of it:

{
  "instruction": "When was the last flight of Concorde?",
  "context": "",
  "response": "On 26 November 2003",
  "category": "open_qa"
}

This example was taken from the databricks-dolly-15k dataset, which is an open-source dataset for Supervised/Instruction Fine-Tuning [1]. You can download the dataset from here. Instructions fall into seven categories, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
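Records like the one above are typically flattened into a single training sequence before fine-tuning. The template below is purely illustrative — real pipelines, including those built on Dolly, use their own formats:

```python
def format_example(record):
    """Concatenate an instruction record into one training sequence.

    The Instruction/Context/Response template here is an assumption for
    illustration, not the dataset's prescribed format.
    """
    ctx = f"\nContext: {record['context']}" if record.get("context") else ""
    return f"Instruction: {record['instruction']}{ctx}\nResponse: {record['response']}"

record = {
    "instruction": "When was the last flight of Concorde?",
    "context": "",
    "response": "On 26 November 2003",
    "category": "open_qa",
}
text = format_example(record)
```

The flattened string is then tokenized and trained on exactly like the pretraining sequences, which is why the article says nothing changes algorithmically in this stage.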
This is because we want to train the model on different tasks. For instance, the above instruction is open QA, meaning the question is a general one and does not require reasoning abilities. It teaches the model to answer general questions. Closed QA requires reasoning abilities. During instruction fine-tuning, nothing changes algorithmically. We do the same process as in the previous stage (pretraining): we give the instruction as context tokens and we want the model to continue the sequence with the response.

We continue this process for thousands of examples and then the model is ready to be instructional. But that’s not the end of the story of the model behind ChatGPT. OpenAI designed a supervised reward model that returns a reward for the sequences that were produced by the base model for the same input prompt. They give the model a prompt and run the model four times, for instance, to have four different answers for the same prompt. The model produces different answers each time because of the sampling method they use. Then, the reward model receives the input prompt and the produced answers and assigns a reward score to each answer. The better the answer, the better the reward score is. The reward model requires ground-truth scores to be trained, and these scores came from labelers who worked for OpenAI. Labelers were given prompt text and model responses, and they ranked the responses from best to worst.

At the final stage, ChatGPT uses Reinforcement Learning from Human Feedback (RLHF) to generate responses that get the best scores from the reward model. RL is a framework that tries to find the best way of achieving a goal. The goal can be checkmate in chess or creating the best answer for the input prompt. The RL learning process is like doing an action and getting a reward or penalty for it, and we learn not to take actions that end up penalized.
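One common way to turn such best-to-worst rankings into reward-model training data — not necessarily OpenAI's exact recipe — is to expand each ranked list into pairwise (chosen, rejected) comparisons:

```python
from itertools import combinations

def pairwise_preferences(ranked_responses):
    """Expand a best-to-worst ranking into (chosen, rejected) pairs.

    Every response is 'chosen' over every response ranked below it.
    """
    return [(ranked_responses[i], ranked_responses[j])
            for i, j in combinations(range(len(ranked_responses)), 2)]

pairs = pairwise_preferences(["best", "good", "okay", "worst"])
# 4 ranked answers expand into 6 (chosen, rejected) pairs
```

The reward model is then trained so that it scores the chosen response of each pair above the rejected one, which is how a handful of rankings per prompt becomes a rich supervised signal.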
RLHF is what made ChatGPT so good. The PPO-ptx plot shows the win rate of GPT + RLHF compared to SFT (the Supervised Fine-Tuned model), GPT with prompt engineering, and the GPT base model.

Conclusion

In summation, the ChatGPT stack exemplifies AI's potent fusion with language generation. From inception to proficient AI assistant, we've traversed core stages – pretraining, fine-tuning, and reinforcement learning. Decoder-only transformers have enlivened Shakespearean text and insights.

Tokenization's role in enabling ChatGPT's prowess concludes our journey. This AI evolution showcases technology's synergy with creative text generation.

ChatGPT's ascent highlights AI's potential to emulate human-like language understanding. With ongoing refinement, the future promises versatile conversational AI that bridges artificial intelligence and language's artistry, fostering human-AI understanding.

Author Bio

Saeed Dehqan trains language models from scratch. Currently, his work is centered around Language Models for text generation, and he possesses a strong understanding of the underlying concepts of neural networks. He is proficient in using optimizers such as genetic algorithms to fine-tune network hyperparameters and has experience with neural architecture search (NAS) by using reinforcement learning (RL). He implements models starting from data gathering to monitoring, and deployment on mobile, web, cloud, etc.
ChatGPT for Power Developers

Jakov Semenski
17 Oct 2023
7 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights and books. Don't miss out – sign up today!

Introduction

What do Power Developers know about ChatGPT's capabilities that you don't?

You've tinkered with ChatGPT, got some fun replies, and maybe even used it for some quick Q&A. But there's a feeling of missing out, isn't there? ChatGPT feels like a vast ocean, and you've only skimmed the surface. Deep down, you know there's more. What's the secret sauce?

It's like having a sports car and only driving in first gear. ChatGPT is built for more, way more. Hold on to your coding hat, because there's a blueprint, a set of hidden levers and buttons that power users are pressing. Ready to get in on the secret?

Envision a world where you're not just using ChatGPT but mastering it. Every challenge, every coding puzzle, you've got a secret weapon. Welcome to the world of Power Developers.

Here are 3 advanced prompts you can use to level up your AI skills so you can harness ChatGPT like never before.

PowerPoint

You are about to experience how to create customized, memorable presentations. I will show you how to use ChatGPT to automate your presentation outline generation and generate jaw-dropping content that keeps your viewers engaged. Instead of starting off with blank slides, we will use a format from one of the best presentation trainers, Jason Teteak.

Here is the full megaprompt; don’t get overwhelmed by its length. You just need to replace the TOPIC and AUDIENCE parts.

TOPIC= Why do we need the Spring framework
AUDIENCE= Junior developers who know Java

Create a presentation outline for {TOPIC} and {AUDIENCE} by using the famous presentation framework from Jason Teteak from his book Rule the Room
Make sure to identify what the audience wants:
• What are your biggest concerns or worries?
• What are the biggest challenges you have with those areas?
• What are the problems they are causing?
• What's your ideal outcome?
• What would getting that outcome do for you?
Use takeaways:
• Start with an action verb. The trick to doing this is to mentally insert the words "As a result of my presentation, you will be able to..." at the beginning of the phrase.
• Use seven words or less. A string of seven items is the maximum number people can hold in their short-term memory.
• Use familiar words. Avoid what I call cliquespeak - using words or assuming a grasp of concepts people new to or unfamiliar with your field won't understand.
Identify pain and pleasure points, and say how the takeaways relieve pain points and enhance pleasure points
Define how the takeaways offer happiness, success and/or freedom
Create the title according to the formula: start with an action verb, use 7 words or less, and use familiar words
Use the following format
For slides use markdown
Title is h1
Content is using bulletpoints
For what you say use italic and add "You say:"
Give your credentials
Tell the audience how what you do will help them. Example: "I help community bankers find new income sources."
Deliver the main hook. Example: "I'm going to offer you a new source of income with less risk plus the expertise you need to expand services to old customers and attract new ones."
Main Agenda slide - Complete list of takeaways
Highlighted Takeaway #1 slide
Task Slide #1 - Complete list of tasks for takeaway #1
What you say: Takeaway #1 hook sentence
Example slide
What you say
Highlighted Takeaway #2 slide
Task Slide #2 - Complete list of tasks for takeaway #2
What you say: Takeaway #2 hook sentence
Highlighted Takeaway #3 slide
Task Slide #3 - Complete list of tasks for takeaway #3
What you say: Takeaway #3 hook sentence
Example slide
Summary Slide - Complete list of takeaways
What you say: Takeaway #3 hook sentence
Final Slide
What you say:
- offer to stay for individual questions
- Thank the audience
- add a pleasantry to conclude the presentation (e.g.
Have a great day)

Here is the full conversation: https://chat.openai.com/share/e116d8c4-b267-466e-9d9e-39799f073e24

Here is what you can get from this prompt:

Simulate running an app

Let’s imagine you want to demo a backend up and running. You need to present it to coworkers, or just verify how the final app might work.

You would need:
• a working code base
• a running server (locally or in the cloud)
• running storage (e.g. a database)
• tools to engage (create GET or POST requests to interact)

What if I told you that ChatGPT can do it all for you with only 1 prompt? Here is the full prompt; you can just replace the APP part:

APP: Spring REST application that persists a list of conferences in a MySQL database; it exposes GET and POST mappings
Imagine there is a MySQL database already running with a conferences table. The application can be accessed by invoking GET or POST requests.
I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. Imagine for the given {APP} we are in the directory which contains the full application code. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.
Here is the chat: https://chat.openai.com/share/74dad74d-8a59-43e8-8c5c-042dfcecda99

You get an output of starting the app, or of making a POST request to add a conference. ChatGPT did not actually run the code, but frankly, it did an excellent job of simulating everything.

Creating an Educational Outline

Ever noticed how most educational content out there feels like it’s either too basic or way over your head? It's like there's no middle ground. Endless hours scrolling and reading, but in the end, you're still at square one. That's not learning; that's a wild goose chase.

But wait, what if there's a different way? A formula, perhaps, to craft content that resonates, educates, and empowers? Imagine diving into educational material that sparks curiosity, drives understanding, and equips you with actionable insights.

It’s time to revolutionize educational content for developers. Be authentic, be clear, and always keep the learner at the heart of your content.

Now replace COURSE NAME and AUDIENCE according to your needs.

COURSE NAME= How to start writing Java tests that are fun and easy
AUDIENCE= Junior developers

You are an expert developer in crafting authentic, clear training outlines that always keep the learner at the heart of your content. Such an outline sparks curiosity, drives understanding, and equips the learner with actionable insights.
I need you to create an outline for a 5-part educational course called {COURSE NAME}
Give this course 3 examples of compelling course names
For context, this audience is {AUDIENCE}
Your output should be formatted like this:

# NAME OF THE COURSE with 3 examples
## PART OF THE COURSE
### Idea 1
- Sub point 1
- Sub point 2
- Sub point 3
### Idea 2
- Sub point 1
- Sub point 2
- Sub point 3
### Idea 3
- Sub point 1
- Sub point 2
- Sub point 3

Every PART should be a headline for the respective part
Every Idea is one heading inside that PART
Every Sub point is supportive of the above idea

Here is the link: https://chat.openai.com/share/096f48c4-8886-4d4c-a051-49eb1516b730

And a screenshot of the output:

Conclusion

In conclusion, ChatGPT holds the key to a new realm of coding mastery. By delving into the advanced prompts and hidden techniques, you're poised to become a true Power Developer. Embrace this journey, unleash ChatGPT's potential, and pave the way for a future where you're not just using AI but shaping it to your advantage. With a mix of storytelling, real-world examples, and interactivity, you can craft content that developers crave.

Author Bio

Jakov Semenski is an IT Architect working at IBMiX with almost 20 years of experience. He is also a ChatGPT Speaker at the WeAreDevelopers conference and shares valuable tech stories on LinkedIn.
Merlyn Shelley
27 Oct 2023
12 min read

AI_Distilled #23: Apple’s Gen AI, Nvidia's Eureka AI Agent, Qualcomm’s Snapdragon Elite X chips, DALL·E 3 in ChatGPT Plus, PyTorch Edge’s ExecuTorch, RL with Cloud TPUs

Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

👋 Hello, welcome to another scintillating edition of AI_Distilled, featuring recent advancements in training and fine-tuning LLMs, GPT, and AI models for enhanced business outcomes. Let's kick off this week's news and analysis with an industry expert's opinion.

“For me, the biggest opportunity we have is AI. Just like the cloud transformed every software category, we think AI is one such transformational shift. Whether it's in search or our Office software.” - Satya Nadella, CEO, Microsoft.

AI is indeed the biggest opportunity for mankind, a paradigm shift that can fundamentally redefine everything we know across industries. Recent reports suggest Apple will deploy cloud-based and on-device edge AI in iPhones and iPads in 2024. Qualcomm's newly unveiled Snapdragon Elite X chips will find use in Microsoft Windows "AI PCs" for AI acceleration of tasks ranging from email summarization to image creation. It's remarkable how AI has disrupted even PC environments for everyday users.

This week, we've brought you industry developments including DALL·E 3's unveiling for ChatGPT Plus and Enterprise users, Universal Music Group suing Anthropic over copyrighted lyrics distribution, OpenAI in talks for an $86 billion valuation, surpassing leading tech firms, and the Mojo SDK's availability for Macs, unleashing AI power on Apple Silicon.

Look out for our curated collection of AI secret knowledge and tutorials on PyTorch Edge unveiling ExecuTorch for on-device inference, scaling reinforcement learning with Cloud TPUs, building an IoT sensor network with AWS IoT Core and Amazon DocumentDB, and deploying embedding models with Hugging Face Inference Endpoints.
📥 Feedback on the Weekly EditionWhat do you think of this issue and our newsletter?Please consider taking the short survey below to share your thoughts and you will get a free PDF of the “The Applied Artificial Intelligence Workshop” eBook upon completion. Complete the Survey. Get a Packt eBook for Free!Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!  Cheers,  Merlyn Shelley  Editor-in-Chief, Packt     SignUp | Advertise | Archives⚡ TechWave: AI/GPT News & Analysis👉 Apple Aims to Introduce Generative AI to iPhone and iPad in Late 2024: Tech analyst Jeff Pu suggests that Apple is planning to integrate generative AI into its devices, beginning as early as late 2024. Apple is expected to deploy a combination of cloud-based and on-device edge AI. This move is aimed at letting users automate complex tasks and enhance Siri's capabilities, possibly starting with iOS 18. Apple remains cautious about privacy and responsible use of AI, acknowledging potential biases and hallucinations. 👉 DALL·E 3 Unveiled for ChatGPT Plus and Enterprise Users: OpenAI has introduced DALL·E 3 in ChatGPT, offering advanced image generation capabilities for Plus and Enterprise users. This feature allows users to describe their desired images, and DALL·E 3 creates a selection of visuals for them to refine and iterate upon within the chat. OpenAI has incorporated safety measures to prevent the generation of harmful content. Moreover, they are researching a provenance classifier to identify AI-generated images.  👉 Universal Music Group Sues AI Company Anthropic Over Copyrighted Lyrics Distribution: Universal Music Group and music publishers have filed a lawsuit against Anthropic for distributing copyrighted lyrics through its AI model Claude 2. The complaint alleges that Claude 2 can generate lyrics closely resembling copyrighted songs without proper licensing, even when not explicitly prompted to do so. 
The music publishers claim that while other lyric distribution platforms pay to license lyrics, Anthropic omits essential copyright management information.  👉 Nvidia's Eureka AI Agent, Powered by GPT-4, Teaches Robots Complex Skills: Nvidia Research has introduced Eureka, an AI agent driven by GPT-4 from OpenAI, capable of autonomously training robots in intricate tasks. Eureka can independently craft reward algorithms and has successfully instructed robots in various activities, including pen-spinning tricks and opening drawers. It also published the Eureka library of AI algorithms, allowing experimentation with Nvidia Isaac Gym. This innovative work leverages the potential of LLMs and Nvidia's GPU-accelerated simulation technologies, marking a significant step in advancing reinforcement learning methods.   👉 OpenAI in Talks for $86 Billion Valuation, Surpassing Leading Tech Firms: OpenAI, the company responsible for ChatGPT, is reportedly in discussions to offer its employees' shares at an astounding $86 billion valuation, surpassing tech giants like Stripe and Shein. This tender offer is in negotiation with potential investors, although final terms remain unconfirmed. With Microsoft holding a 49% stake, OpenAI is on its way to achieving an annual revenue of $1 billion. If this valuation holds, it would place OpenAI among the ranks of SpaceX and ByteDance, becoming one of the most valuable privately held firms globally.  👉 Mojo SDK Now Available for Mac: Unleashing AI Power on Apple Silicon: The Mojo SDK, which has seen considerable success on Linux systems, is now accessible for Mac users, specifically Apple Silicon devices. This development comes in response to user feedback and demand. The blog post outlines the steps for Mac users to get started with the Mojo SDK. Additionally, there's a Visual Studio Code extension for Mojo, offering a seamless development experience. 
The Mojo SDK's remarkable speed and performance on Mac, taking full advantage of hardware capabilities, is highlighted. 👉 Qualcomm Reveals Snapdragon Elite X Chip for AI-Enhanced Laptops: Qualcomm introduced the Snapdragon Elite X chip for Windows laptops, optimized for AI tasks like email summarization and text generation. Google, Meta, and Microsoft plan to use these features in their devices, envisioning a new era of "AI PCs." Qualcomm aims to rival Apple's chips, claiming superior performance and energy efficiency. With the ability to handle AI models with 13 billion parameters, this chip appeals to creators and businesses seeking AI capabilities.  🔮 Expert Insights from Packt Community  Deep Learning with TensorFlow and Keras - Third Edition - By Amita Kapoor, Antonio Gulli, Sujit Pal Prediction using linear regression Linear regression is one of the most widely known modeling techniques. Existing for more than 200 years, it has been explored from almost all possible angles. Linear regression assumes a linear relationship between the input variable (X) and the output variable (Y). If we consider only one independent variable and one dependent variable, what we get is a simple linear regression. Consider the case of house price prediction, defined in the preceding section; the area of the house (A) is the independent variable, and the price (Y) of the house is the dependent variable.  We import the necessary modules. It is a simple example, so we’ll be using only NumPy, pandas, and Matplotlib: import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import pandas as pd  Next, we generate random data with a linear relationship. To make it more realistic, we also add a random noise element. 
You can see the two variables (the cause, area, and the effect, price) follow a positive linear dependence: #Generate a random data np.random.seed(0) area = 2.5 * np.random.randn(100) + 25 price = 25 * area + 5 + np.random.randint(20,50, size = len(area)) data = np.array([area, price]) data = pd.DataFrame(data = data.T, columns=['area','price']) plt.scatter(data['area'], data['price']) plt.show() Now, we calculate the two regression coefficients using the equations we defined. You can see the result is very much near the linear relationship we have simulated: W = sum(price*(area-np.mean(area))) / sum((area-np.mean(area))**2) b = np.mean(price) - W*np.mean(area) print("The regression coefficients are", W,b)  ----------------------------------------------- The regression coefficients are 24.815544052284988 43.4989785533412 Let us now try predicting the new prices using the obtained weight and bias values: y_pred = W * area + b  Next, we plot the predicted prices along with the actual price. You can see that predicted prices follow a linear relationship with the area: plt.plot(area, y_pred, color='red',label="Predicted Price") plt.scatter(data['area'], data['price'], label="Training Data") plt.xlabel("Area") plt.ylabel("Price") plt.legend() This content is from the book “Deep Learning with TensorFlow and Keras - Third Edition” by Amita Kapoor, Antonio Gulli, Sujit Pal (Oct 2022). Start reading a free chapter or access the entire Packt digital library free for 7 days by signing up now. To learn more, click on the button below.Read through the Chapter 1 unlocked here...  🌟 Secret Knowledge: AI/LLM Resources📀 The Advantages of Small LLMs: Smaller LLMs are easier to debug and don't require specialized hardware, which is crucial in today's chip-demanding market. They are cost-effective to run, expanding their applicability. Additionally, they exhibit lower latency, making them suitable for low-latency environments and edge computing. 
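The excerpt's snippets arrive run together by extraction; assembled into one runnable script they look like this (NumPy only — the TensorFlow import isn't actually used by the closed-form fit, and the plotting calls are left out here):

```python
import numpy as np

# Generate random data with a linear relationship plus noise,
# exactly as in the book excerpt.
np.random.seed(0)
area = 2.5 * np.random.randn(100) + 25
price = 25 * area + 5 + np.random.randint(20, 50, size=len(area))

# Closed-form least-squares estimates for the slope W and intercept b.
W = np.sum(price * (area - np.mean(area))) / np.sum((area - np.mean(area)) ** 2)
b = np.mean(price) - W * np.mean(area)
print("The regression coefficients are", W, b)
# The excerpt reports W ~ 24.82 and b ~ 43.50 for this fixed seed.

# Predicted prices from the fitted line.
y_pred = W * area + b
```

Because the seed is fixed, the coefficients reproduce the values printed in the book.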
Deploying small LLMs is more straightforward, and they can even be ensembled for improved performance. 📀 PyTorch Edge Unveils ExecuTorch for On-Device Inference: The PyTorch Edge team has introduced ExecuTorch, a solution that empowers on-device inference on mobile and edge devices with the support of industry leaders like Arm, Apple, and Qualcomm Innovation Center. ExecuTorch aims to address the fragmentation in the on-device AI ecosystem by offering extension points for third-party integration to accelerate ML models on specialized hardware.  📀 AI-Boosted Software Development Journey: AI assistance simplifies design, code generation, debugging, and impact analysis, streamlining workflows and enhancing productivity. From idea to production, this post takes you through various stages of development, starting with collaborative design sessions aided by AI tools like Gmail's help me write and Google Lens. Duet AI for Google Cloud assists in code generation, error handling, and even test case creation. This AI assistance extends to operations, service health monitoring, and security.  📀 Scaling Reinforcement Learning with Cloud TPUs: Learn how Cloud TPUs are revolutionizing Reinforcement Learning by enhancing the training process for AI agents. This article explores the significant impact of TPUs on RL workloads, using the DeepPCB case as an example. Thanks to TPUs, DeepPCB achieved a remarkable 235x boost in throughput and a 90% reduction in training costs, significantly improving the quality of PCB routings. The Sebulba architecture, optimized for TPUs, is presented as a scalable solution for RL systems, offering reduced communication overhead, high parallelization, and improved scalability.   💡 Masterclass: AI/LLM Tutorials🎯 Building an IoT Sensor Network with AWS IoT Core and Amazon DocumentDB: Learn how to create an IoT sensor network solution for processing IoT sensor data via AWS IoT Core and storing it using Amazon DocumentDB (with MongoDB compatibility). 
This guide explores the dynamic nature of IoT data, making Amazon DocumentDB an ideal choice due to its support for flexible schemas and scalability for JSON workloads.  🎯 Building Conversational AI with Generative AI for Enhanced Employee Productivity: Learn how to develop a lifelike conversational AI agent using Google Cloud's generative AI capabilities. This AI agent can significantly improve employee productivity by helping them quickly find relevant information from internal and external sources. Leveraging Dialogflow and Google enterprise search, you can create a conversational AI experience that understands employee queries and provides them with precise answers.  🎯 A Step-by-Step Guide to Utilizing Feast for Enhanced Product Recommendations: In this comprehensive guide, you will learn how to leverage Feast, a powerful ML feature store, to build effective product recommendation systems. Feast simplifies the storage, management, and serving of features for machine learning models, making it a valuable tool for organizations. This step-by-step tutorial will walk you through configuring Feast with BigQuery and Cloud Bigtable, generating features, ingesting data, and retrieving both offline and online features.  🎯 Constructing a Mini GPT-Style Model from Scratch: In this tutorial, you’ll explore model architecture, demonstrating training and inference processes. Know the essential components, such as data processing, vocabulary construction, and data transformation functions. Key concepts covered include tokens, vocabulary, text sequences, and vocabulary indices. The article also introduces the Self-Attention module, a crucial component of transformer-based models.  🎯 Deploy Embedding Models with Hugging Face Inference Endpoints: In contrast to LLMs, embedding models are smaller and faster for inference, which is valuable for updating models or improving fine-tuning. 
The post guides you through deploying open-source embedding models on Hugging Face Inference Endpoints. It also covers running large-scale batch requests. Learn about the benefits of Inference Endpoints, Text Embeddings Inference, and how to deploy models efficiently.  🚀 HackHub: Trending AI Tools🔨 xlang-ai/OpenAgents: Open platform with Data, Plugins, and Web Agents for data analysis, versatile tool integration, and web browsing, featuring a user-friendly chat interface. 🔨 AI-Citizen/SolidGPT: Technology business boosting framework allowing developers to interact with their code repository, ask code-related questions, and discuss requirements. 🔨 SkalskiP/SoM: Unofficial implementation of Set-of-Mark (SoM) tools. Developers can use it by running Google Colab to work with this implementation, load images, and label objects of interest.🔨 zjunlp/factchd: Code for detecting fact-conflicting hallucinations in text for developers to evaluate factuality within text produced by LLMs, aiding in the detection of factual errors and enhancing credibility in text generation. 
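Once an embedding model is deployed behind an endpoint, most client-side work reduces to comparing the vectors it returns. A stdlib-only sketch of the standard cosine-similarity comparison (the four-dimensional vectors are toy stand-ins; real models return hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity -- the usual way embedding vectors
    returned by an inference endpoint are compared."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for a query and two documents.
query = [0.1, 0.3, 0.5, 0.1]
doc_similar = [0.1, 0.29, 0.52, 0.09]
doc_unrelated = [0.9, -0.2, 0.0, 0.4]

print(cosine_similarity(query, doc_similar))    # close to 1.0
print(cosine_similarity(query, doc_unrelated))  # noticeably lower
```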
Martin Yanev
07 Jun 2023
6 min read

Integrating ChatGPT API with Microsoft Office

In this article, we will explore how to set up a PyCharm project and install the docx Python library to extract text from Word documents. The docx library is a Python package that allows us to read and write Microsoft Word (.docx) files and provides a convenient interface to access the information stored in these files.

The first step is to create a new PyCharm project. This will give you a dedicated area to craft and organize your Translation app code:

1. Open the PyCharm IDE on your system.
2. Click on Create New Project from the welcome screen, or go to File | New Project if you're already in the IDE.
3. Keep the default settings.
4. Give your project the name Translation App.
5. Click on Create to create the project.

To run the language translation desktop app, you will need to install the following libraries:

- openai: The openai library allows you to interact with the OpenAI API and perform various natural language processing tasks.
- docx: The docx library allows you to read and write Microsoft Word .docx files using Python.
- tkinter: The tkinter library is a built-in Python library that allows you to create graphical user interfaces (GUIs) for your desktop app.

As tkinter is a built-in library, there is no need to install it since it already exists within your Python environment. To install the openai and docx libraries, access the PyCharm terminal by clicking on View | Tool Windows | Terminal, and then execute the following commands:

pip install openai
pip install python-docx

To access and read the contents of a Word document, you will need to create a sample Word file inside your PyCharm project. Here are the steps to create a new Word file in PyCharm:

1. In your PyCharm project, create a new directory called files.
2. Right-click on the files folder and select New | File.
3. In the dialog box that appears, enter a file name with the .docx extension, for example, info.docx.
Press the Enter key to create the file. Once the file is created, double-click on it to open it.

You can now add some text or content to this file, which we will later access and read using the docx library in Python. For this example, we have created an article about New York City. However, you can choose any Word document containing text that you want to analyze.

The United States' most populous city, often referred to as New York City or NYC, is New York. In 2020, its population reached 8,804,190 people across 300.46 square miles, making it the most densely populated major city in the country and over two times more populous than the nation's second-largest city, Los Angeles. The city's population also exceeds that of 38 individual U.S. states. Situated at the southern end of New York State, New York City serves as the Northeast megalopolis and New York metropolitan area's geographic and demographic center - the largest metropolitan area in the country by both urban area and population. Over 58 million people also live within 250 miles of the city. A significant influencer on commerce, health care and life sciences, research, technology, education, politics, tourism, dining, art, fashion, and sports, New York City is a global cultural, financial, entertainment, and media hub. It houses the headquarters of the United Nations, making it a significant center for international diplomacy, and is often referred to as the world's capital.

Now that you have created the Word file inside your PyCharm project, you can move on to the next step, which is to create a new Python file called app.py inside the Translation App root directory. This file will contain the code to read and manipulate the contents of the Word file using the docx library. With the Word file and the Python file in place, you are ready to start writing the code to extract data from the document and use it in your application.
To test whether we can read Word files with the docx Python library, we can implement the following code in our app.py file:

import docx

doc = docx.Document("<full_path_to_docx_file>")
text = ""
for para in doc.paragraphs:
    text += para.text
print(text)

Make sure to replace <full_path_to_docx_file> with the actual path to your Word document file. Obtaining the file path is simple: right-click your docx file in PyCharm and select Copy Path/Reference… from the drop-down menu.

Once you have done that, run the app.py file and verify the output. This code will read the contents of your Word document and print them to the Run window console. If the text extraction works correctly, you should see the text of your document printed in the console (see the figure below). The text variable now holds the data from info.docx as a Python string.

Figure: Word text extraction console output
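As a side note, python-docx is the convenient route, but a .docx file is just a ZIP archive with the document text stored in word/document.xml, so paragraph extraction can also be sketched with the standard library alone. The example below builds a one-paragraph .docx in memory purely to demonstrate; extract_docx_text works the same on a real file path:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used inside word/document.xml.
W_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_docx_text(path_or_file):
    """Pull paragraph text straight out of word/document.xml.

    A .docx file is a ZIP archive, so no third-party library is needed
    for plain-text extraction."""
    with zipfile.ZipFile(path_or_file) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    paragraphs = []
    for para in root.iter(f"{W_NS}p"):  # w:p = paragraph
        # w:t elements hold the actual text runs.
        paragraphs.append("".join(t.text or "" for t in para.iter(f"{W_NS}t")))
    return "\n".join(paragraphs)

# Build a minimal in-memory .docx just for the demo.
doc_xml = (
    '<w:document xmlns:w='
    '"http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
    "<w:body><w:p><w:r><w:t>Hello from NYC</w:t></w:r></w:p></w:body>"
    "</w:document>"
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", doc_xml)
buf.seek(0)
print(extract_docx_text(buf))
```

For real documents with formatting, tables, or images, python-docx remains the better tool; this fallback only recovers plain paragraph text.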
https://www.linkedin.com/in/martinyanev/
https://www.udemy.com/user/martin-yanev-3/

Jesse Liberty
08 Jun 2023
9 min read

Writing Unit Tests in C# with ease using ChatGPT

The world is abuzz with talk of large language model AI, especially its iconic ChatGPT. No one is more excited and perhaps threatened than programmers. Will ChatGPT soon write code well enough to change or even eliminate our jobs? In this article, I set such concerns aside and focus on how ChatGPT can enhance our programming today. One of the places I find ChatGPT most useful is in the writing of unit tests. To illustrate this, I'm going to dip into the source code used in my latest book: .NET MAUI For C# Developers. If you haven't read my book (actually, why not stop here and run out and buy it? I can wait here) it illustrates the creation of a helpful app that, among other things, requires you to create an account. Here is the code for doing so:  [RelayCommand] async Task SignUp() {    if (EnableButton)    {      AccountCreateRequest accountCreateRequest = new()      {        Email = this.Email,        FullName = Name,        PlainPassword = Password      };      try      {        CreateAccountIsRunning = true;        await accountService.CreateAccount(accountCreateRequest);        CreateAccountIsRunning = false;        await Application.Current.MainPage.DisplayAlert(         "Sign up completed",        "Your user has been created successfully", "Ok");        await Shell.Current.GoToAsync("..");      }      catch (Exception e)      {          CreateAccountIsRunning = false;        await Application.Current.MainPage.DisplayAlert(         "Sign up failed",        "We were not able to create an account with that user name",         "Ok");      }    } } The logic here is simple, I've selected it specifically so that you can easily imagine how you would write unit tests. Let's see how ChatGPT does when presented with this code. I opened ChatGPT and entered the following message:  Write two unit tests with xUnit and nSubstitute in C# for the following .NET MAUI code (I pasted in the code. It responded so quickly it took my breath away)Sure! 
Here are two unit tests using xUnit and NSubstitute for the SignUp() method: I particularly like the self-assured and friendly "Sure!" at the beginning, and the restatement of what it has been asked to do. Its results come flying out in a box that has a handy Copy Code button on the top right, and a reminder on the top left that the code is in C# as shown in the figure:   Figure 1 – top of ChatGPT response  Notice that it marks the first section. // Arrange As you'll see it uses and annotates the AAA approach to unit tests (Arrange, Act, Assert). Here are the two-unit tests it produced: public class YourViewModelTests {    [Fact]    public async Task SignUp_WhenEnableButtonIsFalse_DoesNotCallAccountServiceAndDisplaysNoAlert()    {        // Arrange        var accountService = Substitute.For<IAccountService>();        var applicationMock = Substitute.For<Application>();        var mainPageMock = Substitute.For<Page>();        var shellMock = Substitute.For<Shell>();        applicationMock.MainPage = mainPageMock;        Shell.Current = shellMock;        var viewModel = new YourViewModel(accountService)        {            Email = "test@example.com",            Name = "John Doe",            Password = "password",            EnableButton = false        };        // Act        await viewModel.SignUp();        // Assert        await accountService.DidNotReceive().CreateAccount(Arg.Any<AccountCreateRequest>());        await applicationMock.MainPage.DidNotReceive().DisplayAlert(Arg.Any<string>(), Arg.Any<string>(), Arg.Any<string>());        await shellMock.DidNotReceive().GoToAsync(Arg.Any<string>());    }    [Fact]    public async Task SignUp_WhenEnableButtonIsTrue_CallsAccountServiceAndDisplaysAlertOnError()    {        // Arrange        var accountService = Substitute.For<IAccountService>();        accountService            .When(x => x.CreateAccount(Arg.Any<AccountCreateRequest>()))            .Throw(new Exception("Failed to create account"));        var 
applicationMock = Substitute.For<Application>();        var mainPageMock = Substitute.For<Page>();        var shellMock = Substitute.For<Shell>();        applicationMock.MainPage = mainPageMock;        Shell.Current = shellMock;        var viewModel = new YourViewModel(accountService)        {            Email = "test@example.com",            Name = "John Doe",            Password = "password",            EnableButton = true        };        // Act        await viewModel.SignUp();        // Assert        await accountService.Received(1).CreateAccount(Arg.Is<AccountCreateRequest>(request =>            request.Email == viewModel.Email &&            request.FullName == viewModel.Name &&            request.PlainPassword == viewModel.Password));        await applicationMock.MainPage.Received(1).DisplayAlert("Sign up failed",            "We were not able to create an account with that user name", "Ok");        await shellMock.DidNotReceive().GoToAsync(Arg.Any<string>());    } }  After the code comes to these handy instructions and reminders: In the first test case, SignUp_WhenEnableButtonIsFalse_DoesNotCallAccountServiceAndDisplaysNoAlert(), we set EnableButton to false and verify that the SignUp() method does not call CreateAccount() on the accountService, and no alerts are displayed. In the second test case, SignUp_WhenEnableButtonIsTrue_CallsAccountServiceAndDisplaysAlertOnError(), we set EnableButton to true and configure the accountService to throw an exception when CreateAccount() is called. We verify that the method calls CreateAccount(), displays the appropriate alert for failure, and does not navigate to another page using GoToAsync(). By following those instructions, I had two working, useful unit tests, literally in seconds (well, a lot of seconds, but only because it took me a minute to read and understand (and verify) what it had done.) The first time I made the request, I forgot to say it in xUnit and it spits out the tests in nUnit. 
I then entered: Do it again using xUnit. It didn't hesitate; out came the code in xUnit along with modified instructions.

Summary

In sum, my pair-programming buddy, ChatGPT, saved me a lot of tedious work creating unit tests. I can attest that it works equally well on more complex logic. It doesn't always get it right the first time, but you can "tune" it with follow-up questions, and it remembers what it just did, so rather than repeating the entire command you can just say, as I do above, Do it again, but this time… We used to say that coding by Google is not a problem as long as you know what you are doing. The problem only arose with very junior programmers who took what they found online as-is without checking that it made sense. The same is true here: ChatGPT can get you going, and you can often use its code as-is, but you need to keep a cautious eye on it, as it certainly can get things wrong.

Writing unit tests is a critical, though often overlooked, part of writing code well. ChatGPT not only makes it relatively easy, but it also encourages creating tests because you can do so quickly. It is so good that I happily pay $20/month for a subscription entitling me to unrestricted use and access to the latest iteration.

Author Bio

Jesse Liberty is a full-time hands-on programmer, specializing in C#, git, and .NET MAUI. He hosts the popular Yet Another Podcast and is the author of more than a dozen best-selling programming books. Liberty is a Certified Xamarin Developer, a Xamarin MVP, and a Microsoft MVP. He was a Technical Evangelist for Microsoft, a Distinguished Software Engineer at AT&T, a Software Architect for PBS, and a Vice President of Information Technology at Citibank, and he was on the teaching staff at Brandeis University. Jesse is a recognized expert and has spoken at conferences worldwide.

Links: LinkedIn | .NET MAUI for C# Developers
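The Arrange-Act-Assert structure ChatGPT produced is not C#-specific. For readers outside .NET, here is the same shape sketched in Python, with the standard library's unittest.mock standing in for NSubstitute (SignUpViewModel here is a simplified stand-in, not the book's actual class):

```python
from unittest.mock import Mock

class SignUpViewModel:
    """Minimal stand-in mirroring the article's SignUp() flow."""
    def __init__(self, account_service):
        self.account_service = account_service
        self.enable_button = False

    def sign_up(self, email, name, password):
        if not self.enable_button:
            return "skipped"
        try:
            self.account_service.create_account(
                email=email, full_name=name, password=password)
            return "Sign up completed"
        except Exception:
            return "Sign up failed"

# Arrange: a mocked service, as NSubstitute does in the C# tests.
service = Mock()
vm = SignUpViewModel(service)

# Act + Assert: button disabled -> the service must never be called.
assert vm.sign_up("test@example.com", "John Doe", "pw") == "skipped"
service.create_account.assert_not_called()

# Arrange: enable the button and make the service raise.
vm.enable_button = True
service.create_account.side_effect = Exception("Failed to create account")

# Act + Assert: the failure path surfaces the "failed" alert text.
assert vm.sign_up("test@example.com", "John Doe", "pw") == "Sign up failed"
service.create_account.assert_called_once()
```

The two scenarios match the two xUnit tests above: one verifies the guard clause short-circuits, the other verifies the error path when the account service throws.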