How to use ChatGPT Vision to turn handwritten forms into data

Takeaways: ChatGPT can turn handwritten forms into data, even with sloppy handwriting. Defining a schema for the desired output helps. It makes mistakes, so output still needs to be validated and possibly fixed by hand. It can’t be automated with the API yet: images must be uploaded manually to the web application, with a limit of four images per upload […]

Using ChatGPT to clean data: an experiment

One of the most annoying parts of data work is dealing with inconsistent entities: the same person’s name spelled differently, or company names that rebranded, merged, or carry varying suffixes like “Ltd.” and “Limited”. Standardizing data for accurate analysis can take days, sometimes weeks, even with powerful tools like OpenRefine and Dedupe, which were made […]

How to extract entities from raw text with spaCy: 3 approaches using Canadian data

TL;DR: Use the en_core_web_trf transformer model with spaCy to get much more accurate named entity recognition with multilingual text. Entity recognition is one of the marvels of current technology, at least from a journalist’s perspective. There was a time journalists had to read through hundreds, maybe thousands of documents, highlight names of people, companies and […]

Getting tabular data from unstructured text with GPT-3: an ongoing experiment

One of the most exciting applications of AI in journalism is the creation of structured data from unstructured text. Government reports, legal documents, emails, memos… these are rich with content like names, organizations, dates, and prices. But to get them into a format that can be analyzed and counted, like a spreadsheet, usually involves days […]
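One common shape for this technique is to spell out the desired columns in the prompt and ask the model to emit CSV. This is a sketch of that general approach, not the post’s exact prompt; the column names and sample text are illustrative assumptions:

```python
# Sketch: build a prompt asking a GPT-style model to return structured CSV.
# The columns and sample text below are illustrative, not from the post.
COLUMNS = ["name", "organization", "date", "price"]

def build_extraction_prompt(text: str) -> str:
    """Ask the model for CSV rows with a fixed, explicit set of columns."""
    header = ",".join(COLUMNS)
    return (
        "Extract every entity mentioned in the text below into CSV rows "
        f"with exactly these columns: {header}. "
        "Leave a field empty when the value is missing.\n\n"
        f"Text:\n{text}\n\nCSV:\n{header}\n"
    )

prompt = build_extraction_prompt(
    "The contract with Acme Ltd. was signed by Jane Doe on 2021-03-04 for $15,000."
)
# The prompt would then be sent to the completions API; the response is
# parsed with csv.reader and validated (row length, date formats) before use.
```

Pinning the column order in the prompt makes the response mechanically parseable, which is what turns free text into something a spreadsheet can count.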

4 ways to make self-updating Datawrapper charts

Datawrapper is currently the best tool for creating quick and simple charts. It’s so useful and feature-rich that news organizations with their own in-house charting tools are switching over. One of its best features is the ability to connect a CSV file hosted on the web as a data source. This enables users […]

Using NLP to analyze open-ended responses in surveys

One of the final frontiers of data analysis is making sense of unstructured text like reports and open-ended responses in surveys. Natural language processing (NLP), with the help of AI, is making this kind of analysis more accessible. Libraries like spaCy and Gensim, although still code-based, are simplifying the process of getting insights out of […]

How data and transparency can restore trust in journalism: a speech to the Concordia Library Research Forum

This is the text of the keynote speech I delivered at the 2018 Concordia Library Research Forum. It has been edited slightly. Thank you for this lovely opportunity to be among you this morning. I feel especially honoured to deliver this keynote because I believe librarians and journalists share a special kinship because our jobs […]

What I learned in 3 hours about doing great data journalism at the New York Times and ProPublica

This post originally appeared on Medium. I was fortunate to take part in the Data Journalism Unconference hosted by Global Editors Network in New York this week. Attendees had the option of visiting two newsrooms for a “study tour” of their data teams. I chose the New York Times and ProPublica, two publications I admire. […]