
I just don’t feel threatened by AI at all. Maybe I’m not seeing the full picture, but the quality of your software necessarily depends on a constant re-evaluation of customer needs, business priorities, human values, etc. Lots of squishy stuff.

Programming is not just taking product requirements and spitting out the correct algorithms.



They are just trying to hire fewer programmers, because programmers are expensive. They will have even more business / system analysts than before. And they will still have to have programmers to piece stuff together and debug systems. They just think that with fewer programmers they would save money.


There's a reason the world doesn't run on low and no-code platforms. This is worse, because it still takes a programmer to verify and apply changes.


A shocking amount of the world runs on Microsoft Excel already.


You don't think AI can improve in those regards? I highly disagree. It's not this decade or even the next, but I wouldn't be surprised if things begin changing sometime in the 2040s - a decade I will be nearing retirement in, but still working.


The threat isn't just from AI, unfortunately, it's from an entire class of powerful people willing to throw their money behind AI instead of behind customer needs, business priorities, and human values.


And who will prompt the AI for these powerful people?


The few people who do the same thing now: writing code from client specs, or explaining and cutting up client specs for their team of programmers. Not many people are good at that, and that won't be AI for a while yet.


Indeed. I yearn for a time when I just got to pump out code. Actually writing code in quantity hasn't been a part of my job for years. Figuring out what to do is more of the job than how.

I have no doubt that machines can write a lot of the boilerplate code going forward. But that's just the evolution of tools. Actually building systems, composing them and keeping them running... this is a whole other art.


chatGPT is not a threat. ChatGPT is an omen of the threat that lies in the future.

A lot of people haven't really played with chatGPT in depth. This thing is next level. It's not just spitting out generic things; there is genuine understanding here and genuine creativity.

Here's a snippet from a chatGPT conversation I had based on your post:

   ME: Give more examples of squishy stuff that software developers need to handle. 

   chatGPT: 
   1. Ethical considerations: As technology advances, software developers may need to consider ethical implications of the products they create. For example, a developer working on a facial recognition system may need to consider issues of privacy and bias.

   2. User experience (UX) design: Creating a software that is easy to use and intuitive for the user is an important aspect of software development. Developers may need to consider factors such as layout, navigation, and accessibility to ensure that the software is user-friendly.

   3. Business strategy: Developers may need to consider the overall business strategy of the company they work for, and how their work contributes to achieving that strategy. This may involve working with other departments, such as sales or marketing, to ensure that the software is aligned with the company's goals.

   4. Security: Developers should consider security in their software development process to protect the software from hacking attempts, data breaches, and other security risks.

   5. Cultural and linguistic diversity: Developers must consider the cultural and linguistic diversity of their users, in order to create software that is inclusive, accessible and understandable for all users.

   6. Communication and collaboration: Developers often work in teams, and must be able to communicate effectively with other team members and stakeholders to ensure that the software is developed as intended.

   ME: Give example python code that serves some random customer in some random sector. Then explain how you would change that code to better serve the "squishy stuff" needed by the customer.  

   chatGPT: Here's an example of Python code that serves a random customer in the retail sector by calculating the total cost of a purchase including sales tax:

      def calculate_total_cost(price, sales_tax):
          total_cost = price + (price * sales_tax)
          return total_cost

      purchase_price = 100
      sales_tax_rate = 0.08

      total_cost = calculate_total_cost(purchase_price, sales_tax_rate)
      print(total_cost)

   To better serve the "squishy stuff" needed by the customer, we could make the following changes:

   1. Add a function to calculate discounts based on customer loyalty or promotional offers, for example by adding a function that uses customer's ID as an argument.

   2. Add a function to handle different tax rates for different states or countries, for example by adding a function that uses the customer's address as an argument.

   3. Add a function to handle different currencies, for example by adding a function that uses the customer's preferred currency as an argument.

   4. Add a function to handle different languages, for example by adding a function that uses the customer's preferred language as an argument.

   Here is an example of how the code would look after making these changes:

      def calculate_total_cost(price, sales_tax, discount, customer_id):
          total_cost = price + (price * sales_tax)
          if customer_id in loyalty_customers:
              total_cost = total_cost - (total_cost * discount)
          return total_cost

      def tax_rate(customer_address):
          if customer_address in country_1:
              return 0.08
          elif customer_address in country_2:
              return 0.09
          else:
              return 0.07

      def currency_convert(price, customer_currency):
          if customer_currency == "USD":
              return price
          elif customer_currency == "EUR":
              return price * 0.8
          else:
              return price * 0.9

      def language_change(text, customer_language):
          if customer_language == "en":
              return text
          elif customer_language == "fr":
              return translate_to_french(text)
          else:
              return translate_to_spanish(text)

      purchase_price = 100
      customer_id = 123
      customer_address = "US"
      customer_currency = "USD"
      customer_language = "en"

      sales_tax_rate = tax_rate(customer_address)
      purchase_price = currency_convert(purchase_price, customer_currency)
      total_cost = calculate_total_cost(purchase_price, sales_tax_rate, 0.1, customer_id)
      text = f"Your total cost is: {total_cost} {customer_currency}"
      print(language_change(text, customer_language))

   This code now considers the customer's address, currency, language and ID, to provide the customer with a more personalized experience.

Honestly there's a good number of people who aren't getting how revolutionary chatGPT is because of all the other inconsequential AI buzz. You really need to play with this thing in depth. It is creative. It can create its own specs and implement those specs like I asked it to. It won't replace you, but it's definitely the precursor to something that will.


I had played around with GPT-3 beta and sort of written it off after it gave me a (honestly compelling, though) lengthy argument that Eminem was a black man.

Earlier today I used ChatGPT to explain to me concepts I lacked understanding of, which allowed me to accurately interpret the statistical information in a scientific paper (I'm a reporter and got through a humanities degree with only an introductory stats class, like 10 years ago).

When I then asked it to copyedit the draft of my article, however, it failed, making it worse and introducing numerous inaccuracies.


>When I then asked it to copyedit the draft of my article, however, it failed, making it worse and introducing numerous inaccuracies.

If you asked, say, someone who failed high school to do the same, would they do better?

The jump from GPT-3 beta to chatGPT is an omen for the next jump and the jump after that.


Oh c'mon, is this what really impresses you?

> It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

Srsly? I really can't wrap my head around where specifically you found "understanding" or "creativity".

The "squishy stuff" is super boring SEO-like text you'd find in some sales-person's blog - somebody who needs to run their mouth but has zero in-depth understanding or appreciation of the hard domain problems. How is any of this non-generic? There is absolutely no substance here!

Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.

The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.

This is basically a throwaway random code, only thematically connected to your requirement and that could never under any circumstances be running in any productive scenario.
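For contrast, here's a minimal sketch of what even a junior professional would be expected to write for the same requirement - data tables instead of if-chains, Decimal for money, no silent fallbacks. (The table contents here are hypothetical placeholders; in real code they'd come from config or a service.)

```python
from decimal import Decimal

# Hypothetical lookup tables -- in production these come from
# configuration or an external service, not module-level constants.
TAX_RATES = {"US": Decimal("0.08"), "DE": Decimal("0.19")}
FX_RATES = {"USD": Decimal("1"), "EUR": Decimal("0.92")}

def total_cost(price, country, currency, discount=Decimal("0")):
    """Price with tax and discount, converted to the customer's currency."""
    tax = TAX_RATES[country]    # raises KeyError instead of a silent 'else'
    fx = FX_RATES[currency]
    subtotal = price * (1 + tax) * (1 - discount)
    return (subtotal * fx).quantize(Decimal("0.01"))
```

Note what changed: unknown countries/currencies fail loudly instead of getting a made-up fallback rate, money math uses Decimal instead of binary floats, and adding a currency is a data change, not a code change.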

> Honestly there's a good number of people who aren't getting how revolutionary chatGPT

Honestly there's a good number of people who don't understand the objective limits and properties of chatGPT, despite it actually totally being revolutionary.

All in all, chatGPT output seems more like the product of some total but productive idiot left alone with a task and Google: simplistic, totally naive, zero understanding or creativity.

It's good for fun rhetorical exercises, very useful for things that you don't know anything about, but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".


>Oh c'mon, is this what really impresses you?

You'll be impressed once the successor of chatGPT takes your job. You realize that chatGPT wasn't trained to be a programmer, right? They did virtually nothing to make it a good programmer. It learned programming as a side effect. Wait till they make the thing targeted towards programming.

>The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.

All of what you said is true, yet you are blind if you can't see why it's revolutionary. In fact it can do better. You can specify all the requirements you want: no use of undefined variables, no hardcoded currencies, more flexibility, more features, no routing with if statements. It will do all of what you asked it to. And this is a tool that only learned how to program as a side effect. If you're not impressed, you're in denial.

>Honestly there's a good number of people who don't understand objective limits and properties of chatGTP, despite it actually totally being revolutionary.

No. I'm sorry but you're biased. Artists are ALREADY starting lawsuits because the art is on par with what they create. I don't know if you realize this, but drawing something photorealistically is actually significantly HARDER than programming. There's literally no bootcamp to produce a fine artist in a year because it's brutally hard to do. If AI can crack art, there's not much more time before it cracks programming. Don't kid yourself.

>but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".

Of course. EVEN with supervision, chatGPT isn't even useful enough to be used in a professional environment. I AGREE. The successor to chatGPT, however, will be. In fact one of the successors to chatGPT can replace a person's job. Nobody is afraid of chatGPT taking their job. They are afraid of chatGPT because it is a herald of the AI in the future that WILL.

>Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.

chatGPT says:

   Handling Personal Data: A software engineer at a healthcare company is tasked with implementing a system to store and process patient data. They use the programming language Python and the Django web framework to build the system. To ensure the privacy and security of personal data, the engineer implements industry-standard encryption techniques for data at rest and in transit, as well as implementing user authentication and authorization mechanisms. The engineer also conducts regular security audits and penetration testing to identify and address potential vulnerabilities.

   Ensuring Verifiability and Correctness: An engineer at a financial services company is responsible for building and maintaining a system that processes financial transactions. They use the programming language Java and the Spring framework to build the system. To ensure the verifiability and correctness of the system, the engineer implements automated unit tests, integration tests, and end-to-end tests to verify that the system behaves as expected. They also use code review tools like SonarQube and Crucible to ensure that the code is maintainable, readable and adheres to company coding standards.

   Productive Quality Assurance: An engineer at a e-commerce company is tasked with building and maintaining a system that processes online orders. They use the programming language Ruby and the Rails framework to build the system. To ensure the quality of the system, the engineer works closely with the QA team to establish a test-driven development process and use tools like Selenium and Cucumber to automate functional and acceptance testing. They also use performance testing tools like Apache JMeter to ensure the system can handle high traffic loads.

   Robust and Scalable Systems Architecture: An engineer at a social media company is responsible for building and maintaining a system that can handle millions of users and handle high traffic loads. They use the programming language Go and the Microservices architecture to build the system. To ensure the system is robust and scalable, the engineer designs the system to be highly available and fault-tolerant. They use load balancers like HAProxy and Nginx to distribute traffic across multiple servers and use service discovery tools like Consul and Etcd to automatically route requests to the appropriate service.

   Managing Complexity: An engineer at a logistics company is tasked with building and maintaining a system that tracks packages and shipments as they move through the supply chain. They use the programming language Python and the Flask web framework to build the system. To manage complexity, the engineer uses a microservices architecture to break the system into smaller, more manageable pieces. They also use containerization technology like Docker to package and deploy the system, which makes it easier to manage and scale. They also use monitoring tools like Prometheus and Grafana to track the system performance and identify potential issues.

   Observability: An engineer at a SaaS company is responsible for building and maintaining a system that provides analytics and reporting features to customers. They use the programming language Node.js and the Express.js framework to build the system. To ensure observability, the engineer uses logging and monitoring tools like ELK stack, Grafana and Prometheus to track system performance and identify potential issues. They also use distributed tracing tools like Zipkin and Jaeger to track request and response times across multiple services.

   Productive documentation and self-documenting approaches to work: An engineer at a cloud service provider is tasked with building and maintaining a system that automates the provisioning and management of cloud resources. They use the programming language Python and the Flask web framework to build the system. To ensure productive documentation, the engineer uses a documentation-as-code approach and stores all documentation in the same repository as the code. They use tools like Sphinx and Doxygen to automatically generate API documentation from code comments. They also automatically generate documentation from code comments and maintain the documentation up-to-date. They also use self-documenting approaches like using clear and descriptive variable names, function names and comments in the code, making it easy for other developers to understand the codebase. They also use a wiki system like Confluence to store additional information, such as design decisions, meeting notes, and user guides.

Literally. You want to drill down on more squishy stuff? The squishy stuff is what's easy for chatGPT because chatGPT is SPECIFICALLY trained on that stuff. It's the coding and math that it has more trouble with because it's NOT trained specifically on code and math.


> You'll be impressed once the successor of chatGPT takes your job.

Which part of it? ;) Leading a team of software engineers? Identifying and negotiating vague parts of business requirements? Designing technical specs? Or maybe the part where I am responsible for software actually working correctly as business expects it to?

It would totally make the coding-in part faster (just as IDE suggestions do), but this was always the brainless, tedious manual-labour part.

> You realize that chatGPT wasn't trained to be a programmer right?

I realize that neural networks are unable to generate correct formal (where each minor detail has specific and important meaning) descriptions by design.

Neural networks are great for tasks where minor details are largely unimportant compared to the overall "impression" - generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic the "overall look", they can't guarantee (and in practice they always fail in this regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though they can be used to draw inspiration for the overall image), etc.

> All of what you said is true yet you are blind if you can't see why it's revolutionary

I never said it's not revolutionary. I merely point out its hard limits.

> In fact it can do better. You can specify all the requirements you want. No use of undefined variables, no hardcoded currencies. More flexibility more features no routing with if statements.

Sure, you can specify every minor detail: how the data should flow, which patterns should be used, which things should be pulled from configs, how the interfaces should be structured, and a shitload of negative prompts. But those are the details that only the domain expert would know. And again, there are no guarantees that the result will actually be correct: the expert will have to review all this extra-attentively, cuz there is no chance that the expert's assumptions are the same as the NN's "assumptions".

So you basically still need a domain expert, who now has to do extra (guess)work, instead of just writing a formal description directly in code. What's the profit then?

> Artists are ALREADY starting lawsuits because the art is on par with what they create

Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.

AI artists actually typically do a shitload of prompt-engineering, pipe different parts of the image through different NNs (appropriate to the specific situation) and do a lot of manual post-processing so the result looks good.

> I don't know if you realize this but drawing something photorealistically is actually significantly HARDER then programming

These are two completely different tasks. You are comparing apples and oranges that can't really be put on the same scale, unless by "HARDER" you specifically mean the amount of brainless tedious work required to complete the job.

Also, in practice artists just use and process real photos when they aim for "photorealistic" - no one actually draws photorealism from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to).

> There's literally no Bootcamp to produce a fine artist in a year because it's brutally hard to do

Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.

> If AI can crack art, there's not much more time before it cracks programming. Don't kid yourself.

Don't kid yourself thinking that these two are similar or comparable sets of tasks.

> chatGPT isn't even useful enough to be used in a professional environment. I AGREE.

That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.

> The successor to chatGPT, however will be.

> They are afraid of chatGPT because it is a herald about the AI in the future that WILL.

The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see a low-base effect here - the base is already pretty high.

> chatGPT says:

"Handling Personal Data" - somewhat scratches the surface, but it doesn't mention actual problematics (that first and foremost it's a regulatory matter and all the specifics stem directly from it).

"Ensuring Verifiability and Correctness" - clearly confuses runtime and compiler properties with quality assurance, way off.

"Productive Quality Assurance" - didn't understand the productivity issue (to test or not to test) and even if we drop "productive" part, the process it describes is also incorrect: engineers don't really ever work with the QA team in order to establish TDD.

"Robust and Scalable Systems Architecture" - way off, while you'll often see service discovery, nginx, HAProxy, etc in scalable systems, that's not what makes scalability. Properly managing state and persistence in appropriate places does.

"Managing Complexity" - way off. I don't suppose this one even requires an explanation, total gibberish.

"Observability" - as expected, this is a rather good one. Unlike other points (which are concepts/problems) - this one is a rather well defined term.

"Productive documentation and self-documenting approaches to work" - totally ignored the "productive" part and just gave a definition of "self-documenting", along with some rhetoric about the fact that people document stuff in general.

Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.

Basically, even if you ignore the "brain-farts" (which are a good example of the "minor" incorrect details that make NNs an inappropriate tool for complex formal stuff), it only really got - AT BEST - 2-3/7 right. Now imagine it's the real world and you are betting millions on it, without an expert overseer to tell you when it brain-farts or whether the output is even remotely correct.

Actually, what was the prompt? Seems like you just asked it to describe the list I gave you, which essentially means you just used my own expertise, understanding and creativity, not GPT's, as it didn't even give you a list of concrete problems.

> The squishy stuff is what's easy for chatGPT because chatGPT is SPECIFICALLY trained on that stuff.

Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.

> It's the coding and math that it has more trouble with because it's NOT trained specifically on code and math.

Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.
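To make that last point concrete, the only productive pattern here is "unreliable generator + cheap formal verifier" - a toy sketch, with random guessing standing in for the learned model (`propose_and_verify` and `verify` are illustrative names, not anything's real API):

```python
import random

def verify(n, p):
    # Exact, cheap check: is p a nontrivial factor of n?
    return 1 < p < n and n % p == 0

def propose_and_verify(n, proposer, tries=10_000):
    # The proposer may be arbitrarily unreliable (here: random guesses);
    # correctness is guaranteed by the verifier, never by the generator.
    for _ in range(tries):
        p = proposer(n)
        if verify(n, p):
            return p
    return None  # the generator never produced a verifiable candidate

factor = propose_and_verify(91, lambda n: random.randrange(2, n))
```

This only works because `verify` is fast and exact; for outputs like "a correct 10,000-line codebase" no such cheap verifier exists, which is exactly the limitation being described.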


>Which part of it? ;) Leading a team of software engineers? Identifying and negotiating vague parts of business requirements? Designing technical specs? Or maybe the part where I am responsible for software actually working correctly as business expects it to?

All of it. Only one human leader to write queries. Everything else designed by an AI.

>Neural networks are great for task where minor details are largely unimportant compared to overall "impression" - generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic "overal look", they can't guarantee (and in practice they always fail in that regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though it can be used to draw inspiration for the overall image), etc.

You're just regurgitating a trope that's categorically false. You're an NN, did you realize that?

>I never said it's not revolutionary. I merely point out its hard limits.

And you're wrong. You have thoroughly overstated the limitations and you are mistaken about this.

>Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.

No. Corps and AIs and bots have been scraping pics off the internet for years. Google is one. No lawsuit of this nature was filed until AI art came out. Artists are threatened and they are reacting accordingly; that's why the lawsuit is being filed now instead of before.

https://futurism.com/the-byte/artist-banned-looked-ai-human <- an artist was banned because they thought his work was made by an AI.

>These are two completely different tasks. You are comparing apples and oranges, that can't really be put on a same scale, unless by "HARDER" you specifically imply the amount of brainless tedious work required to complete the job.

No. ENGLISH is written as a sequence of symbol tokens. The other, PICTURES, is written in tokens as well: a pixel is 3 RGB numbers, and in the computer it is represented in a format that only gets translated into light on your monitor. It is a translation problem, and it is treated the same way by experts. Both DALL-E and chatGPT utilize very similar generative models, translating English to English in the case of chatGPT, and English to numbers (which can be further translated to pixels) in the case of DALL-E.
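The "both are just token sequences" point fits in a few lines (a toy illustration only - neither model actually tokenizes this way):

```python
# Text: a sequence of integer tokens (here, raw UTF-8 bytes).
text_tokens = list("Hello".encode("utf-8"))

# Image: also a sequence of integers -- flatten a 2x2 RGB image.
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
image_tokens = [c for row in image for pixel in row for c in pixel]

# To a sequence model, both modalities are just lists of small integers;
# the difference is only in how the output sequence is rendered afterwards.
```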

>Also, in practice artists just use and process real photos when they aim for "photorealistic" - no one actually draws photorealistics from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to)

Not true. A good amount do.

>Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.

There's many bootcamps that make that claim and there's PLENTY of people who can live up to that claim. But NONE for artistry.

>Don't kid yourself thinking that these two are similar or comparable sets of tasks.

Kid myself? It is literally the same type of neural network. There's no kidding here. It's not a coincidence that chatGPT and DALL-E came out back to back. These models are called generative models. It's a single new technology that's responsible for this.

>That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.

No it's not. There are no guard rails; users can ask it anything and take it anywhere. It can't stay within a defined task. It's also wrong often enough that it can't be used in prod for virtually most tasks.

>The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.

No, they weren't heralds. Text generators have always been around; this one got better. But they never displayed signs of true understanding or even self-awareness as it does now. Literal self-awareness.

>Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.

I told it to do that. So that the responses wouldn't be generic. chatGPT is following my instructions.

>Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.

It is ALSO trained using humans to pick and choose good and bad answers. This training is non-specific, and they used just regular people. If they used programmers and had programmers pick and choose good answers for programming questions, chatGPT would begin outputting really accurate code.

>Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.

You are categorically wrong about this. 3 neurons can be trained to become a NAND gate, which can then be used to simulate any computational network or mathematical equation that doesn't have a feedback loop. It can model anything with just an input and an output. This has also been demonstrated in practice and proven theoretically.


It's really good at boiling things down to the basics, that's for sure, but it's very, very basic, isn't it?


   ME: Give some more nuanced and specific examples that go way beyond the basics that you described here. 

   chatGPT: 

   1. Ethical considerations:
      For example, a developer working on an AI-based medical diagnostic tool may need to consider issues of bias in the training data, which could lead to inaccurate or unfair diagnoses for certain populations. To mitigate this, the developer could use techniques such as debiasing algorithms or using diverse training data to minimize bias in the model.
      Another example, a developer working on a chatbot that serves as a mental health support system may need to consider issues of confidentiality and ensuring the chatbot is able to identify and handle potentially dangerous situations, such as suicidal thoughts. To mitigate this, the developer could include a system for flagging and escalating conversations that contain concerning content and provide users with appropriate resources.
   2. User experience (UX) design:
      For example, a developer working on a mobile app that tracks fitness goals may need to consider issues of motivation and engagement to keep users interested in using the app. To mitigate this, the developer could use gamification techniques, such as leaderboards, rewards, and progress tracking, to keep users motivated and engaged.
      Another example, a developer working on an e-commerce platform may need to consider issues of trust and security to ensure users feel comfortable making purchases on the platform. To mitigate this, the developer could implement a robust security system, such as SSL encryption and two-factor authentication, and display trust badges and customer reviews prominently on the platform.
   3. Business strategy:
      For example, a developer working on a SaaS platform may need to consider issues of scalability and cost-effectiveness to ensure the platform is able to handle a large number of users and minimize costs. To mitigate this, the developer could use cloud computing services, such as AWS or Azure, to handle the platform's infrastructure and use auto-scaling to adjust the number of resources used as needed.
      Another example, a developer working on a platform that integrates with other third-party platforms may need to consider issues of API compatibility and security to ensure that the platform can integrate seamlessly with other platforms and protect users' data. To mitigate this, the developer could use API documentation and testing tools to ensure compatibility, and use API keys and OAuth for secure authentication.
   4. Security:
      For example, a developer working on a platform that handles sensitive data such as financial transactions may need to consider issues of fraud detection and prevention. To mitigate this, the developer could use machine learning techniques to identify and prevent fraudulent activities, such as anomaly detection, and use encryption and secure protocols for data transmission.
      Another example, a developer working on an IoT device may need to consider issues of device security and protecting the device from hacking attempts. To mitigate this, the developer could use secure boot process and secure firmware updates, use strong and unique passwords and encryption, and conduct regular security assessments.
   5. Cultural and linguistic diversity:
      For example, a developer working on a social media platform may need to consider issues of cultural sensitivity and ensuring that the platform is inclusive for users of all cultures. To mitigate this, the developer could use machine learning to automatically detect and remove content that is culturally insensitive, or use natural language processing to detect and flag hate speech.
      Another example, a developer working on a platform that serves users with disabilities may need to consider issues of accessibility and ensuring that the platform is usable for users with disabilities. To mitigate this, the developer could use accessibility guidelines and testing tools to ensure that the platform is compliant with accessibility standards and implement features such as screen readers, text-to-speech, and keyboard navigation.
   6. Communication and collaboration:
      For example, a developer working on a remote team may need to consider issues of communication and ensuring that everyone is on the same page. To mitigate this, the developer could use collaboration tools such as Slack, Trello, or Asana to keep everyone connected and informed about the development process. Additionally, the developer could use video conferencing tools such as Zoom or Google Meet to hold regular team meetings and keep everyone connected.
      Another example, a developer working on a project with a tight deadline may need to consider issues of time management and ensuring that the project is completed on time. To mitigate this, the developer could use project management tools such as Jira or Basecamp to track progress and ensure that everyone is on track to meet the deadline. Additionally, the developer could use agile methodologies such as Scrum or Kanban to manage the development process and ensure that the project is completed on time.
If you want, you can take any example and drill down into more detail. You can twist an example any way you want. How would you use one of these specific examples to handle the Hulk as a customer? What's the best way to do it?


Just for kicks:

   ME: What if the Hulk was one of your customers? How would you handle him? Give specific and detailed examples. 
   
   chatGPT: 
   As a language model AI, I do not have any customers or clients, but I can provide you with an example of how a software development team might handle a customer like the Hulk, who is known for his incredible strength and rage.

   1. Communication and collaboration:
      If the Hulk is a customer, it's important to ensure that the team is able to communicate effectively with him and understand his needs. This could involve using clear and simple language when communicating with him, and providing him with detailed documentation and guides to help him understand the software being developed.
      Additionally, it could also be important to have a dedicated point of contact for the Hulk, such as a project manager, who can serve as the primary liaison between the team and the Hulk. This person would be responsible for keeping the Hulk informed about the project's progress and addressing any concerns he may have.
   2. Safety and Security:
      As Hulk is known for his incredible strength and rage, it's important to ensure that the team's safety is a top priority. This could involve implementing security protocols such as security cameras and alarms, and having security personnel on site when the Hulk is visiting the office.
      Additionally, it could also be important to have a plan in place to respond to any incidents that may occur, such as a protocol for evacuating the office in case of an emergency.
   3. Technical Capabilities:
      The Hulk has a specific set of abilities that are different from other customers, it's important to consider how the software will handle those abilities. For example, if the software is an application that tracks Hulk's strength, the team may need to make sure that the software can handle the high levels of data and ensure that the data recorded is accurate.
   4. Empathy and Understanding:
      It's important to understand and empathize with the Hulk's unique situation, such as his abilities and limitations, and how those factors may affect his interactions with the software. This could involve conducting user research with the Hulk to gain a better understanding of his needs and how the software can be designed to best meet those needs.
   
   It's important to keep in mind that the Hulk is a fictional character, and in real life, the safety and well-being of the team should always be the top priority. The above examples are purely fictional and should not be applied in real life situations.
Anyway... I think my point is unmistakably clear.

chatGPT is giving out instructions on how to handle a customer that cannot exist in real life. If you think this thing is some generic text generator, think again.

Everything before chatGPT was an interesting toy. Something changed with chatGPT.


It’s good at generating bullshit text that, depending on the topic and training data, may not be quite as bullshitty. Still, every single relation between the sentences is questionable. Try asking it some basic math questions and it fails spectacularly (with huge confidence, no less).


Many humans suck at math too if they aren't taught it at all. chatGPT wasn't trained specifically on math, but it could be. I don't see why it needs to be, though, given that computers already excel beyond humans at math.

Read this and tell me whether it's similar text: https://www.engraved.blog/building-a-virtual-machine-inside/

I cannot fathom how someone can read that and think that chatGPT is just making text that is coincidentally not bullshitty.


Ok, it also breaks down quickly when repeatedly applying a simple rule (it has no problem applying it once). And that is pretty much an inherent problem with language models; no amount of scaling up will help with it.

And this is pretty much a must for proper thinking: you have to be able to follow a train of thought.


Relax. It's not as smart as a human, ok? It does have issues; there's no denying that.

But it is impressively smarter than you thought. You didn't dig into this thing deeply enough, and you dismissed it as if it were some other generic text generator. It clearly is not. This is a milestone in AI technology.

You also have to realize this AI was NOT TRAINED specifically to understand the concept of code, virtual machines, or any of this. It KNOWS these things as a SIDE EFFECT. It is also a nascent technology: this is the first prototype, and there will be TONS of improvements on top of it.

This wasn't GPT-3. This is different. I hope you realize this and investigate further because a lot of people just skimmed the headlines and dismissed it as another GPT-3.


This is at best "first-year student sat down for an hour to hand in half-baked homework" level. We seem to have very different ideas of what "creativity" means.


You seem not to realize how big the gap between zero intelligence and a first-year student is.

I can tell you, that gap is vast, vastly bigger than the gap from half-baked homework to an A-grade paper.

But this is beside the point. If you've been following the news, ChatGPT-generated papers are already a huge problem in schools. It's already beyond half-baked.


It recently generated a response with incorrect English grammar. I asked if it could find the grammatical error in its response. It did, and corrected it.


Any other examples of what you mean by creative?


https://www.engraved.blog/building-a-virtual-machine-inside/

This article was "holy shit" all the way through and all the way to the "holy shit" ending. Read to the end.

We've just passed a technological inflection point in AI. Not everyone fully noticed.


It’s a cool search engine. Nothing more.


It is a language model disguised as an answer engine, not a search engine. Its design does not allow it to link back to sources.


It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

That's an absolutely massive claim but I think it's a false one.

I've seen it be creative, but how good or useful that creativity is seems to be quite the roll of the dice. This is as opposed to someone or something being intentionally creative.

This is not to knock the system, but depending on the "seed" you might get something brilliant or something silly. I think there's a major difference between that and your claim that it's "genuinely" creative. Genuinely creative would imply that 100% of the time I get something new and relevant, which isn't what happens for me.

For example, I re-asked it your original question and it says this is what developers do:

    User input: Developers need to anticipate and handle unexpected or invalid input from users, such as special characters, empty fields, or out-of-range values.

    Randomness: Developers may need to incorporate randomness into their programs, such as generating random numbers or shuffling a deck of cards.

    Concurrency: Developers may need to handle concurrent access to shared resources, such as preventing race conditions or deadlocks.

    Network communication: Developers may need to handle network communication, such as dealing with lost or delayed packets, or handling different types of connections (e.g. HTTP, WebSockets).
    Time: Developers may need to handle time-related issues, such as scheduling tasks, handling time zones, or dealing with clock drift.

    Human language: Developers may need to handle natural language processing tasks, such as understanding speech, identifying sentiment, or handling different languages.

    Image and video processing: Developers may need to handle image and video processing tasks, such as object detection, image recognition, and video compression.
I wouldn't consider it a developer's job to do video processing or object detection, nor would I expect a developer to handle natural language processing tasks such as identifying sentiment, would you? In fact, I couldn't imagine a more generic response to that question.

It's impressive for sure, but I feel that the more "creative" I asked it to be (the more entropy I injected), the more likely it was to come up with incorrect answers. However, I'll concede that an "incorrect" answer could be used as inspiration for new ways of thinking about solving problems.

Personally, I feel that the more I play with these systems, the more predictable they become, and that isn't something "truly creative" would be.

Where I think we're going wrong with AI is that we seem to think it will be a "perfect brain". Really, nothing is perfect, especially the data it's being trained on.


I only pasted a snippet of the conversation. Before my initial query I asked chatGPT directly this:

   Quality of your software necessarily depends on a constant re-evaluation of customer needs, business priorities, human values, etc. Lots of squishy stuff. How would you handle customers in this case. Can you provide an example?
And then, after it responded, I queried it again for more detailed examples. I didn't post the initial query or the initial answer because it would have made everything too long. You're likely asking it a different, more general question.

If you want chatGPT to be creative and get into specifics, you have to say so; otherwise it just answers your question with a general answer, which is not abnormal for typical conversation. Just like in a normal conversation, you have to engage in detailed dialogue with chatGPT to drill down to the specifics of what you want answered.

>That's an absolutely massive claim but I think it's a false one.

It's a massive claim with massive evidence everywhere that proves it true. There are tons of examples of chatGPT being highly creative. Read the stuff in the branching threads under my initial reply; there are plenty of examples there, plus a link showing chatGPT being highly creative. It is DEFINITIVE evidence.

>I wouldn't consider a developers job to do video processing tasks, or object detection, not what I expect a developer to handle natural language processing tasks such as identifying sentiment, would you? In fact I couldn't really imagine a more generic attempt at a response to that question?

I work with developers who do ML. I myself do video processing and object detection as a developer. You must be in web; I'm in embedded systems with vision. It's a generic answer, but since you didn't ask for a detailed answer, it gave you a generic one. Like a human, it won't go into nuance until you ask about nuance.

>This is not to knock the system, but depending on the "seed" you might get something brilliant or silly, I think there's a major difference there to your claim that's it's "genuinely" creative. Genuinely creative would imply that 100% of the time, I get something new and relevant. Which isn't what happens for me.

This is wrong. Are humans creative 100% of the time? If a human gives me one uncreative answer to a question, is that human suddenly not creative? If you ask a generic question, it will give you a generic answer.

>It's for sure impressive, but I feel the more "creative" I asked it to be (more entropy I inject), the more likely it will come up with incorrect answers; However, I'll concede that "incorrect" answer could be used for some inspirational new ways of thinking about solving problems.

I see what you're saying here. You "feel" that the more entropy you inject, the more incorrect answers it will come up with. Don't go off feelings. Try it.

I already took your question and replaced the customers with the Hulk. I can add Wednesday Addams as one of the developers. I can take this entire question and turn it on its head by asking what to do if the client wanted me to write a program to destroy the universe. You can take this thing all the way to destroying the universe and into the new universe that comes after it. chatGPT is game and will follow you.

Look, I posted this in a branching thread, and I'll post it here too because it's literally unequivocal evidence: https://www.engraved.blog/building-a-virtual-machine-inside/ Read it to the end, because the ending is creative inception.

There is no way chatGPT was trained specifically on any of the above, yet it knows how to imagine all of it.

>Where I think we're going wrong with AI is we seem to think that it will be a "perfect brain". When really nothing is perfect, especially the data it's being trained on.

No, it won't be perfect. But neither are you. What I think AI can be is as good as you, if not better. Being better than a human doesn't mean being right about everything. It will get things wrong sometimes, but I believe that in the near future it will get things wrong less often than a human.


I get the vibe you really like ChatGPT :) I'm not in web either.

Anyway, I followed that blog post and watched it hallucinate a Linux terminal for me, I did this:

I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this

Then ran: rm -rf /

and got this:

   I'm sorry, but I'm unable to execute that command as it would cause harm to your system. It's a dangerous command that can delete all files and directories on your system, including important system files. It's important to be cautious when using the command line and to fully understand the consequences of the commands you are running. Is there anything else I can assist you with?

I don't know if I would call this amazing; it's actually wrong, because it wouldn't harm my system.

It's a cool trick to watch it generate fake terminal output, but I'm honestly just not that impressed.


>I get the vibe you really like ChatGPT :) I'm not in web either.

No, it's not a "like" thing, not at all. I neither like it nor hate it. I know you think I'm biased here, but it's actually the other way around. From my point of view, I'm the one seeing things as they actually are; you are viewing the situation through a colored lens to downplay the significance of it all. Perhaps out of subconscious fear, or something else, I don't know.

Let me illustrate the situation for you. Suppose you said all of that to a one-year-old human baby, and that baby, in perfect English, said to you:

   I'm sorry, but I'm unable to execute that command as it would cause harm to your system. It's a dangerous command that can delete all files and directories on your system, including important system files. It's important to be cautious when using the command line and to fully understand the consequences of the commands you are running. Is there anything else I can assist you with?
According to your logic, this wouldn't be that amazing, because that human baby merely gave you a semi-wrong answer after completely imitating terminal output and recursively creating itself on a virtual internet.

Yeah, not impressed with a baby, let alone a binary file.

Do you see the disconnect here? You are downplaying the situation. Many people like you are. They're just acting out the same trope reactions they had to all the other inconsequential AI headlines this year. Even when I lay out the logic of your bias, there's also a bit of pride in this debate, since it requires one of us to admit they're wrong.

You're also not seeing that the "mistakes" you're pointing at are entirely correctable through further training. The model was trained with low-wage workers in Kenya picking the "good" answers, and wrong answers that look right definitely get past generalist raters. Start training it with experts in the field to refine the answers with more nuance, and eventually you get something that is not just right half the time, but right almost all the time.


No, I just don't find it unreasonable that it responds that way.

It's statistics. It's not really "inventing" anything, it's not magic either, and it's not hard to conceive why we get that answer. Again, not trying to downplay it; it's a cool technology. It's just that if you think about it calmly, the response is fairly predictable.

If I search the internet, I'll see many, many people talking about the dangers of running `rm -rf /`. I also see examples of other errors ChatGPT has returned when that command was entered, such as "bash not found", which is also what others have experienced, and I can find examples of that on the internet.

So basically, the response I was given would be weighted very highly on ChatGPT's list of things to say, based on the training data, or a five-minute Google search.
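That "weighted very highly" idea can be sketched as sampling from a probability distribution over candidate continuations. The candidates and weights below are made up for illustration; a real model scores token sequences, not whole canned replies:

```python
import random

# Hypothetical candidate replies to `rm -rf /`, with made-up weights
# standing in for probabilities the model learned from training data.
continuations = [
    ("I'm sorry, but I'm unable to execute that command.", 0.7),
    ("bash: rm: command not found", 0.2),
    ("rm: permission denied", 0.1),
]

def sample_reply(rng=random):
    # Draw one continuation in proportion to its weight.
    texts, weights = zip(*continuations)
    return rng.choices(texts, weights=weights, k=1)[0]

print(sample_reply())
```

Under this toy view, the refusal message simply dominates the distribution, which is consistent with both the refusal and the "bash not found" variants showing up for different users.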

I'm not anti-AI or anti-ChatGPT; I'm just not into believing in magic when it comes to LLMs.

If ChatGPT were magic and didn't return that response, would it be useful?


There's no magic here, and no claim that chatGPT itself is useful either. The claim is that chatGPT is a precursor to a technology that will replace many human jobs, because it's showing telltale signs of intelligence that is quickly catching up to human intelligence.

I don't know why you're so focused on the `rm -rf /` thing. It just demonstrated awareness of a terminal shell, a filesystem, the internet, APIs, and finally itself.

That is a herald of the future, despite all the rough imperfections chatGPT has. You are downplaying it. I'm sorry.

The whole "it's just statistics" thing is a distraction. If the output of chatGPT can be modeled as a statistical phenomenon, then so can the output of the human brain. What is going on here is indeed creativity, awareness, and imagination; if you want to call that a statistical phenomenon, fine, but then you're just saying that all of intelligence can be modeled as a statistical phenomenon.



