Monday, September 4, 2023

The Unintended Benefit of AI Bias in Educating People About Social Biases

I gave Bard the following command:

Write an article that talks about the fact that the biases we see coming out of AIs have an unintended benefit of educating people about social biases and is garnering more conversation around bias and in turn addressing the negative aspects of bias.

And this is mostly what Bard responded with. Some minor tweaks from me.

One of the unintended benefits of AI bias is that it is raising awareness of social biases. When people see AI systems making biased decisions, it can help them understand how the same biases operate in the real world. This can lead to more conversations about bias and how to address it.

For example, in 2019, a facial recognition algorithm was found to be more accurate at identifying white people than black people. This revelation sparked a lot of discussion about the role of race in facial recognition technology.

Another example is the case of COMPAS, a risk assessment tool used by judges to determine whether defendants should be released from jail before their trial. COMPAS was found to be biased against black defendants, who were more likely to be classified as high-risk and denied bail. This led to calls for reforms to the criminal justice system.

Of course, the exposure of AI bias is not always positive. In some cases it can reinforce existing biases and make them harder to address. On balance, though, AI bias is raising awareness of social biases, and that is an important step towards addressing them and creating a more just and equitable society.

Here are some ways to address the negative aspects of AI bias:

Use more diverse data sets: AI systems are trained on data, so it is important to use data that is as diverse as possible. This will help to reduce the likelihood that AI systems will learn to reflect biases that are present in the data.

Use fairer algorithms: There are a number of algorithms that are designed to be fairer than traditional algorithms. These algorithms can be used to reduce the impact of bias in AI systems.

Be transparent about the data and algorithms: It is important to be transparent about the data and algorithms that are used to train AI systems. This will help to make it easier to identify and address any biases that may be present.

Encourage human oversight: AI systems should not be used to make decisions without human oversight. This will help to ensure that any biases that are present in the AI system are not allowed to have a negative impact on people's lives.
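As a minimal illustration of the "diverse data sets" point above, a simple audit could flag groups that are under-represented in training data before a model is trained. The records, the group field, and the 30% threshold here are all hypothetical, chosen only to make the sketch concrete:

```javascript
// Hypothetical sketch: flag under-represented groups in a labeled
// training set. Data, field names, and threshold are illustrative.
function findUnderrepresented(records, groupField, minShare) {
  const counts = {};
  for (const r of records) {
    const g = r[groupField];
    counts[g] = (counts[g] || 0) + 1;
  }
  // Return every group whose share of the data falls below minShare.
  return Object.entries(counts)
    .filter(([, n]) => n / records.length < minShare)
    .map(([g]) => g);
}

const sample = [
  { label: 'approve', group: 'A' },
  { label: 'approve', group: 'A' },
  { label: 'deny',    group: 'A' },
  { label: 'approve', group: 'B' },
];
console.log(findUnderrepresented(sample, 'group', 0.3)); // [ 'B' ]
```

A check like this does not make a model fair on its own, but it makes the skew in the input data visible so that it can be discussed and corrected.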

By taking these steps, we can help to ensure that AI is used for good and that it does not perpetuate the negative aspects of social bias.

Tuesday, August 15, 2023

The Parallels Between the Evolution of Self-Driving Cars and Generative AI

Generative AI and driver-less cars are two technologies that are rapidly evolving and have the potential to change the world in a major way. Both technologies rely on artificial intelligence (AI) to perform complex tasks, but they do so in very different ways.

Generative AI is a type of AI that can create new content, such as images, text, and music. It does this by learning from existing data and then using that data to generate new variations.

Self-driving cars are vehicles that can navigate roads and highways without a human driver. They use a variety of sensors, such as cameras, radar, and lidar, to collect data about their surroundings. AI algorithms then use this data to decide where to go and how to avoid obstacles.

The generative AI industry and the driver-less car industry have both been growing rapidly in recent years. Here is a timeline of some of the key milestones in each industry:

Generative AI

  • 2014: Generative adversarial networks (GANs) are introduced by Ian Goodfellow and colleagues. GANs are a type of generative model that has been used to produce realistic images, text, and music.
  • 2015: Google releases DeepDream, a program that uses neural networks to generate psychedelic images from ordinary photos.
  • 2019: OpenAI Five defeats OG, the reigning Dota 2 world champions. Although primarily a reinforcement learning system, it showed that AI could outperform humans at complex tasks.
  • 2020: Nvidia releases StyleGAN2, a generative model that can produce photorealistic images of people, animals, and objects.
  • 2022: Google Research announces Imagen, a text-to-image model that can generate strikingly photorealistic images from text prompts.

Driver-less Cars

  • 2005: Stanford's "Stanley" wins the DARPA Grand Challenge, autonomously navigating a 132-mile desert course without human input.
  • 2010: Google announces that it has been testing self-driving cars on public roads.
  • 2015: Tesla releases its Autopilot feature, which provides automated steering and speed control on highways under driver supervision.
  • 2016: Uber launches a self-driving car pilot program in Pittsburgh.
  • 2018: Waymo launches Waymo One, a commercial self-driving ride service in the Phoenix, Arizona area.
  • 2023: Several major automakers announce plans to release self-driving features in the next few years.

Both the generative AI industry and the driver-less car industry are rapidly evolving. It is still too early to say when either technology will become mainstream, but it is clear that both have the potential to change the world in a major way.

One of the most exciting possibilities for generative AI is creating realistic simulations of driving situations, which could be used to train driver-less cars to be safer and more efficient. For example, a generative model could simulate a busy intersection, and driver-less cars could then be trained to navigate that intersection safely and efficiently.

Generative AI could also be used to create new features for driver-less cars. A generative AI model could be used to create new navigation apps that are more intuitive and easier to use. The future of generative AI and driver-less cars is very bright. These two technologies have the potential to revolutionize transportation and make our lives safer, easier, and more enjoyable. 

Thursday, January 14, 2021

Developers procrastinating their way to better code

In his talk "The surprising habits of original thinkers," Adam Grant describes the sweet spot between precrastinators and procrastinators for optimal original thought.

In short, completing your tasks far ahead of time or at the very last minute leads to less original thought.

Think of this in terms of a developer getting their tasks done.

Typically we're pulling tickets off a job queue. If you're a developer, you've probably worked with GitHub Issues, Jira, or a similar system. Tickets are usually prioritized from most important to least important.

If we've never seen the ticket or problem statement that we've just pulled, then our thought process for solving it is the equivalent of an extreme procrastinator's: we're learning about the problem at the same time as we're solving it. That leads to less original solutions, and probably the first solution we think of instead of the best one.

One way to mitigate this is to hold grooming sessions. We meet as a team and go through the backlog of tickets, and in addition to prioritizing them, the ticket creator explains each ticket in more detail so that we can estimate the effort required to complete it. More detail is added to the ticket if needed. The team sometimes discusses possible approaches. Most important, in my opinion, is that the problem has now been dropped into your subconscious, and whether you like it or not, your brain is noodling on a solution in the background.

Something that I feel we don't do often enough is "kick tickets down the road." In other words, there are tickets that we should procrastinate on that we don't.

Some tickets are critical and need to be done immediately. Major security vulnerabilities fall in this category.

If the ticket is not urgent and can be punted and you or the team are uncomfortable with the current array of possible solutions then do it. Kick that ticket down the road. And do it intentionally and without shame. Add a comment on the ticket stating exactly that:

The team discussed this issue on DD-MMM-YYYY and we were unable to come up with an idea for addressing this in a way that we wanted, so we're putting this back in the queue to mull on.

Wednesday, September 20, 2017

How do you measure the value of a PR Review?

Thinking out loud. You employ a developer whose sole job is to review PRs submitted by other developers. How do you measure the value that developer provides?

The formula

Take the value of what the review prevented or saved:

  • The time saved by not having to (1) debug issues in production, (2) fix the code, and (3) review and deploy the fixes, multiplied by the cost of that time.
  • The revenue generated by the customers who would have signed up if that bug did not exist.
  • The revenue that would have been retained if those customers had not canceled their subscriptions because of the bug that got through.

Then subtract the costs:

  • The cost of the review-developer doing the PR review.
  • The cost of the noise created by the review-developer trying to make the value look obvious by over-commenting on the PR. (A quick aside: if you are being measured or incentivized on PR reviews, then you will tend to comment more frequently to show value. Sometimes that will be value, but sometimes it will be costly noise.)
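As a rough sketch of the formula above, here is what the arithmetic could look like. Every figure (hourly cost, hours saved, revenue amounts) is a hypothetical illustration, not real data:

```javascript
// Hypothetical worked example of the PR-review value formula.
// All numbers are made up for illustration.
const hourlyCost = 100;          // blended cost of a developer hour

// Benefits of the review
const hoursSaved = 6 + 3 + 1;    // prod debugging + fixing + re-review/deploy
const timeSavedValue = hoursSaved * hourlyCost;
const newRevenueKept = 2000;     // sign-ups that the bug would have lost
const churnRevenueKept = 1500;   // subscriptions that would have canceled

// Costs of the review
const reviewHours = 2;
const reviewCost = reviewHours * hourlyCost;
const noiseCost = 150;           // time others spend on low-value comments

const reviewValue =
  timeSavedValue + newRevenueKept + churnRevenueKept - reviewCost - noiseCost;

console.log(reviewValue); // 4150
```

The hard part, of course, is not the arithmetic but estimating the counterfactuals: the bug that never shipped and the customers who never churned.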


Review Feedback and Measurement

What if the PR submitter added value markers to each comment that the reviewer made?

Given that the submitter has to read through all the comments anyway, it should take very little effort to tag each comment with a marker. In Stash there's the "like" link, and in GitHub you have a collection of emoji reactions.

An automated script could then aggregate the tags assigned to each reviewer.
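Such a script could be as simple as the sketch below. The input shape (a flat list of comments with a reviewer name and a count of value markers) is a hypothetical simplification, not a real Stash or GitHub API payload:

```javascript
// Hypothetical sketch: total up the value markers left on each
// reviewer's PR comments. The input shape is an assumption, not
// a real Stash or GitHub API response.
function scoreReviewers(comments) {
  const scores = {};
  for (const c of comments) {
    scores[c.reviewer] = (scores[c.reviewer] || 0) + c.valueMarks;
  }
  return scores;
}

const comments = [
  { reviewer: 'alice', valueMarks: 2 },
  { reviewer: 'bob',   valueMarks: 0 },
  { reviewer: 'alice', valueMarks: 1 },
];
console.log(scoreReviewers(comments)); // { alice: 3, bob: 0 }
```

In practice the comment data would come from the code host's API, but the aggregation step stays this simple.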

What is a valuable comment on a PR?

This brings up the question of what counts as value in comments.

"I like that you used reduce() here instead of filter() and map()" - is that a valuable comment? I think it is, though sometimes it needs more context. The value is that you're telling the PR submitter that you (1) read through the code and understood it and (2) thought a particular section was well written.

A comment like this has even more value if the submitter has not used that optimal pattern in the past or in another section of the PR. I have seen cases where a submitter used a "perfect" pattern in one file and then something "messy" to solve the same type of problem in another module. Highlighting the good code is just as valuable as flagging the code that needs improvement.

Thursday, September 7, 2017

The Architectural Decision is Wrong

Every architectural decision we make today will be wrong at some point in the future.

It will be wrong for one of two reasons:

  1. Information we didn't know at the time we made the decision.
  2. Technology that's changed or become available since then.

This also goes for the technology stack that you choose.

When you make your decision, list all of the reasons why you're taking that path. Do it in Jira or GitHub Issues (or whatever tracking system you use), in the first post or description of the ticket.

Copy/paste the two numbered points from the top of this blog post into the bottom of the description and stop wasting any more time on this. It's a reminder that the decision is going to be wrong one day.

Months later, when you can't remember why you made this decision, you can revisit the ticket and make notes about what would have let you make a different decision at the time, if needed. Or perhaps you'll confirm it was the best decision you could have made.

Thursday, March 31, 2016

Coaching and the PR Code Review

There is rarely a time when all the members of a team have a deep understanding of all the technologies being used on a project. On top of that the technologies that they're using are evolving and all team members need to keep up with the changes. Some members will learn new techniques and features earlier than the others.

An effective way of cross training team members is through the Pull Request (PR) Code Review.

Comments on a typical PR will be:
  • This looks like a mistake because...
  • This doesn't conform to our standard because... (This should rarely happen because a lint step should have caught this earlier. If it is happening then the lint rules should be reviewed.)
  • This works. I would do it this other way because...
  • Please add/modify a test for this.
In addition to that, as a reviewer, you should be asking:
  • I'm new to this technology, what does that do?
  • Can you please add a comment above this line explaining what it does?
  • I haven't seen this syntax before, is it the same as?
  • This is fantastic! I never knew you could... (Call it out when someone does something you haven't seen before or does something really well and you learned from it. Everyone loves positive feedback.)
If you submit a PR that contains new features or techniques, consider immediately adding comments (in the PR) about them to help the reviewers, if you know that they may not understand what something does.

The question then arises: should I add a comment in the code or a comment on the PR? My take is that comments about how a language, framework, or technology works in isolation should go in the PR comments. You don't want to clutter the code with information that can be found through a web search. The comments that go in the code explain how this code works.

These are the advantages of learning through code reviews:
  1. The reviewer is learning in the context of the project domain. You don't get more real world than that. Not only will the reviewer be learning new syntax/constructs but they will also be understanding the business domain and gain knowledge in part of the code base that they may need to maintain in the future. Compare this to learning contrived examples in a class.
  2. The reviewer is learning a subset of the technology as it pertains to this domain. We would all love to learn all aspects of each technology we touch. Time constraints do not make this possible. Learning like this in situ provides the most time efficient way of learning the essentials. Again compare this to class learning where you may gain some super interesting knowledge and then not apply it.
  3. The coach, the person who submitted the PR, is forced to look at and explain their code. When you ask your seemingly naive question you may see a comment like "This construct in this language does... Now that I look at it again I see a problem/way-that-I-could-improve-it."
Using this technique you get a better code review on the PR and a great contextual training session at the same time.

Monday, November 23, 2015

MomentJS Notes


npm i moment --save
npm i moment-timezone --save


var moment = require('moment-timezone');

Create moments in and out of DST:

var cdt = moment.tz("2015-07-23 08:30", "America/Chicago");
var cst = moment.tz("2015-11-23 08:30", "America/Chicago");

Check that the offset from UTC is what you'd expect:

> '2015-07-23T09:30:00-04:00'
> '2015-11-23T09:30:00-05:00'

Check that zone-aware format output is what you'd expect:

cdt.tz('America/New_York').format('HH:mm:ss')
> '09:30:00'
cst.tz('America/New_York').format('HH:mm:ss')
> '09:30:00'