{"id":1254,"date":"2021-01-04T13:37:33","date_gmt":"2021-01-04T18:37:33","guid":{"rendered":"https:\/\/www.pwvconsultants.com\/blog\/?p=1254"},"modified":"2021-01-04T13:37:36","modified_gmt":"2021-01-04T18:37:36","slug":"large-language-models-can-leak-private-data","status":"publish","type":"post","link":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/","title":{"rendered":"Large Language Models Can Leak Private Data"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\" id=\"h-a-recent-study-shows-that-large-language-models-trained-on-public-data-can-be-prompted-to-provide-sensitive-information-if-fed-the-right-words-and-phrases-and-the-study-is-just-the-tip-of-the-iceberg\">A recent study shows that large language models trained on public data can be prompted to provide sensitive information if fed the right words and phrases. And the study is just the tip of the iceberg.<\/h2>\n\n\n\n<p>It has already been established that there is bias in artificial intelligence. Specific words are associated with women (smile) versus men (official); negative connotations are often seen in reference to people of color versus their Caucasian counterparts; and so on. That\u2019s not new and definitely needs to be addressed, but a recent study shows that large language models trained on public data can also memorize and leak private information. These problems go hand-in-hand, and as AI becomes more prominent and widely used, both need to be addressed.<\/p>\n\n\n\n<p>The <a rel=\"noreferrer noopener\" href=\"https:\/\/ai.googleblog.com\/2020\/12\/privacy-considerations-in-large.html\" target=\"_blank\">study regarding data leakage<\/a> was conducted by Google, Apple, Stanford University, OpenAI, the University of California, Berkeley, and Northeastern University. 
<a rel=\"noreferrer noopener\" href=\"https:\/\/venturebeat.com\/2020\/12\/16\/google-apple-and-others-show-large-language-models-trained-on-public-data-expose-personal-information\/\" target=\"_blank\">According to VentureBeat<\/a>, \u201cThe researchers report that, of 1,800 snippets from GPT-2, they extracted more than 600 that were memorized from the training data. The examples covered a range of content including news headlines, log messages, JavaScript code, personally identifiable information, and more. Many appeared only infrequently in the training dataset, but the model learned them anyway, perhaps because the originating documents contained multiple instances of the examples.\u201d<\/p>\n\n\n\n<p>The collaboration looked specifically at GPT-2, which has 124 million parameters, compared with the more popular GPT-3, which has 175 billion. The problem is that the research also found larger language models memorize training data more easily than their smaller counterparts. For instance, an experiment showed that GPT-2 XL (1.5 billion parameters) memorized 10 times more information than the smaller GPT-2. The implications for the larger models are huge, and GPT-3 is publicly available through an API.<\/p>\n\n\n\n<p>Microsoft\u2019s Turing Natural Language Generation model is used in several Azure services and contains 17 billion parameters. Facebook\u2019s translation model has over 12 billion parameters. These are the larger language models the study references; because these models memorize their training data, they can leak sensitive information.<\/p>\n\n\n\n<p>\u201cLanguage models continue to demonstrate great utility and flexibility \u2014 yet, like all innovations, they can also pose risks. Developing them responsibly means proactively identifying those risks and developing ways to mitigate them,\u201d Google research scientist Nicholas Carlini wrote in a blog post. 
\u201cGiven that the research community has already trained models 10 to 100 times larger, this means that as time goes by, more work will be required to monitor and mitigate this problem in increasingly large language models \u2026 The fact that these attacks are possible has important consequences for the future of machine learning research using these types of models.\u201d<\/p>\n\n\n\n<p>Not only do these models require intense monitoring and mitigation protocols, but there also needs to be tight security around anything they touch. Since the models are trained on public data, it is important for businesses and consumers to limit the information they share publicly. These models are trained on billions of web-based examples drawn from ebooks, social media platforms and more, and they use this information to complete sentences or paragraphs. This is the information they memorize, and the information that can be \u201cleaked\u201d when prompted correctly.<\/p>\n\n\n\n<p>Imagine a bad actor using this technology to guess the credentials of employees at a company, first testing those credentials on personal accounts to find exact matches before attempting them at the employees\u2019 place of work. If an employee reuses passwords, as many people do, the bad actor is now inside a business system and no one knows they aren\u2019t supposed to be there. They can wreak havoc by implanting malware to steal information, hijacking compute for cryptomining, holding systems hostage or simply bringing the business to a standstill.<\/p>\n\n\n\n<p>With security on the minds of everyone in business, this news puts everyone on high alert. These language models are used in many applications we rely on every day, and hackers now have another avenue to exploit, if they haven\u2019t started already. 
If you haven\u2019t done <a href=\"https:\/\/www.pwvconsultants.com\/blog\/businesses-need-to-review-security-amid-pandemic\/\" target=\"_blank\" rel=\"noreferrer noopener\">a security review recently<\/a>, now is the time. Bring in an expert, consult your team and start fixing weak spots now. The more secure you make your business in this ever-expanding digital realm, the more likely you are to survive when an attack inevitably winds up at your door.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A recent study shows that large language models more easily memorize training data. Which means they are prone to leaking sensitive data.<\/p>\n","protected":false},"author":1,"featured_media":1260,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[1965,1966,14],"tags":[1885,1886,1981,2033,2034,1888,1889,2032,17,2035],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v22.2 (Yoast SEO v22.2) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Large Language Models Can Leak Private Data - PWV Consultants<\/title>\n<meta name=\"description\" content=\"A recent study shows that large language models more easily memorize training data. Which means they are prone to leaking sensitive data.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Large Language Models Can Leak Private Data\" \/>\n<meta property=\"og:description\" content=\"A recent study shows that large language models more easily memorize training data. 
Which means they are prone to leaking sensitive data.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/\" \/>\n<meta property=\"og:site_name\" content=\"PWV Consultants\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/PWV-Consultants-110444033947964\" \/>\n<meta property=\"article:published_time\" content=\"2021-01-04T18:37:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-01-04T18:37:36+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1285\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Pieter VanIperen\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@PWV_Consultants\" \/>\n<meta name=\"twitter:site\" content=\"@PWV_Consultants\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Pieter VanIperen\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/\"},\"author\":{\"name\":\"Pieter VanIperen\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/person\/c15d5d40126a8ad906cb3067de95f8d4\"},\"headline\":\"Large Language Models Can Leak Private Data\",\"datePublished\":\"2021-01-04T18:37:33+00:00\",\"dateModified\":\"2021-01-04T18:37:36+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/\"},\"wordCount\":724,\"publisher\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg\",\"keywords\":[\"AI\",\"Artificial Intelligence\",\"data leak\",\"language models\",\"large language models\",\"Machine Learning\",\"ML\",\"programming languages\",\"Security\",\"training data\"],\"articleSection\":[\"Artificial Intelligence\",\"Machine Learning\",\"Security\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/\",\"url\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/\",\"name\":\"Large Language Models Can Leak Private Data - PWV 
Consultants\",\"isPartOf\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg\",\"datePublished\":\"2021-01-04T18:37:33+00:00\",\"dateModified\":\"2021-01-04T18:37:36+00:00\",\"description\":\"A recent study shows that large language models more easily memorize training data. Which means they are prone to leaking sensitive data.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#primaryimage\",\"url\":\"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg\",\"contentUrl\":\"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg\",\"width\":1920,\"height\":1285,\"caption\":\"Image by Gerd Altmann from Pixabay\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.pwvconsultants.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Large Language Models Can Leak Private 
Data\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#website\",\"url\":\"https:\/\/www.pwvconsultants.com\/blog\/\",\"name\":\"PWV Consultants\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.pwvconsultants.com\/blog\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#organization\",\"name\":\"PWV Consultants\",\"url\":\"https:\/\/www.pwvconsultants.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/04\/logo-alternate-e1585773530392.png\",\"contentUrl\":\"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/04\/logo-alternate-e1585773530392.png\",\"width\":98,\"height\":84,\"caption\":\"PWV Consultants\"},\"image\":{\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/PWV-Consultants-110444033947964\",\"https:\/\/twitter.com\/PWV_Consultants\",\"https:\/\/www.linkedin.com\/company\/pwv-consultants\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/person\/c15d5d40126a8ad906cb3067de95f8d4\",\"name\":\"Pieter VanIperen\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/8b294918257a810803e2befc9a71b7bc?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/8b294918257a810803e2befc9a71b7bc?s=96&d=mm&r=g\",\"caption\":\"Pieter VanIperen\"},\"description\":\"PWV Consultants is a boutique 
group of industry leaders and influencers from the digital tech, security and design industries that acts as trusted technical partners for many Fortune 500 companies, high-visibility startups, universities, defense agencies, and NGOs. Founded by 20-year software engineering veterans, who have founded or co-founded several companies. PWV experts act as trusted advisors and mentors to numerous early stage startups, and have held the titles of software and software security executive, consultant and professor. PWV's expert consulting and advisory work spans several high impact industries in finance, media, medical tech, and defense contracting. PWV's founding experts also authored the highly influential precursor HAZL (jADE) programming language.\",\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/pwv-consultants\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Large Language Models Can Leak Private Data - PWV Consultants","description":"A recent study shows that large language models more easily memorize training data. Which means they are prone to leaking sensitive data.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/","og_locale":"en_US","og_type":"article","og_title":"Large Language Models Can Leak Private Data","og_description":"A recent study shows that large language models more easily memorize training data. 
Which means they are prone to leaking sensitive data.","og_url":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/","og_site_name":"PWV Consultants","article_publisher":"https:\/\/www.facebook.com\/PWV-Consultants-110444033947964","article_published_time":"2021-01-04T18:37:33+00:00","article_modified_time":"2021-01-04T18:37:36+00:00","og_image":[{"width":1920,"height":1285,"url":"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg","type":"image\/jpeg"}],"author":"Pieter VanIperen","twitter_card":"summary_large_image","twitter_creator":"@PWV_Consultants","twitter_site":"@PWV_Consultants","twitter_misc":{"Written by":"Pieter VanIperen","Est. reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#article","isPartOf":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/"},"author":{"name":"Pieter VanIperen","@id":"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/person\/c15d5d40126a8ad906cb3067de95f8d4"},"headline":"Large Language Models Can Leak Private Data","datePublished":"2021-01-04T18:37:33+00:00","dateModified":"2021-01-04T18:37:36+00:00","mainEntityOfPage":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/"},"wordCount":724,"publisher":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#primaryimage"},"thumbnailUrl":"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg","keywords":["AI","Artificial Intelligence","data leak","language models","large language models","Machine Learning","ML","programming languages","Security","training data"],"articleSection":["Artificial 
Intelligence","Machine Learning","Security"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/","url":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/","name":"Large Language Models Can Leak Private Data - PWV Consultants","isPartOf":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#primaryimage"},"image":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#primaryimage"},"thumbnailUrl":"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg","datePublished":"2021-01-04T18:37:33+00:00","dateModified":"2021-01-04T18:37:36+00:00","description":"A recent study shows that large language models more easily memorize training data. Which means they are prone to leaking sensitive data.","breadcrumb":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#primaryimage","url":"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg","contentUrl":"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/12\/artificial-intelligence-2167835_1920.jpg","width":1920,"height":1285,"caption":"Image by Gerd Altmann from 
Pixabay"},{"@type":"BreadcrumbList","@id":"https:\/\/www.pwvconsultants.com\/blog\/large-language-models-can-leak-private-data\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.pwvconsultants.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Large Language Models Can Leak Private Data"}]},{"@type":"WebSite","@id":"https:\/\/www.pwvconsultants.com\/blog\/#website","url":"https:\/\/www.pwvconsultants.com\/blog\/","name":"PWV Consultants","description":"","publisher":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.pwvconsultants.com\/blog\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.pwvconsultants.com\/blog\/#organization","name":"PWV Consultants","url":"https:\/\/www.pwvconsultants.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/04\/logo-alternate-e1585773530392.png","contentUrl":"https:\/\/www.pwvconsultants.com\/blog\/wp-content\/uploads\/2020\/04\/logo-alternate-e1585773530392.png","width":98,"height":84,"caption":"PWV Consultants"},"image":{"@id":"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/PWV-Consultants-110444033947964","https:\/\/twitter.com\/PWV_Consultants","https:\/\/www.linkedin.com\/company\/pwv-consultants"]},{"@type":"Person","@id":"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/person\/c15d5d40126a8ad906cb3067de95f8d4","name":"Pieter 
VanIperen","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.pwvconsultants.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/8b294918257a810803e2befc9a71b7bc?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/8b294918257a810803e2befc9a71b7bc?s=96&d=mm&r=g","caption":"Pieter VanIperen"},"description":"PWV Consultants is a boutique group of industry leaders and influencers from the digital tech, security and design industries that acts as trusted technical partners for many Fortune 500 companies, high-visibility startups, universities, defense agencies, and NGOs. Founded by 20-year software engineering veterans, who have founded or co-founded several companies. PWV experts act as trusted advisors and mentors to numerous early stage startups, and have held the titles of software and software security executive, consultant and professor. PWV's expert consulting and advisory work spans several high impact industries in finance, media, medical tech, and defense contracting. 
PWV's founding experts also authored the highly influential precursor HAZL (jADE) programming language.","sameAs":["https:\/\/www.linkedin.com\/company\/pwv-consultants"]}]}},"_links":{"self":[{"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/posts\/1254"}],"collection":[{"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/comments?post=1254"}],"version-history":[{"count":3,"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/posts\/1254\/revisions"}],"predecessor-version":[{"id":1261,"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/posts\/1254\/revisions\/1261"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/media\/1260"}],"wp:attachment":[{"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/media?parent=1254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/categories?post=1254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.pwvconsultants.com\/blog\/wp-json\/wp\/v2\/tags?post=1254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}