Synth

Language Model Powered Workspace

Synth is a multi-modal workspace that explores how language models can rethink the note-taking experience.

Overview:

The summer of 2023 saw a significant boom in the development of Large Language Models and in their capability to interpret and generate semantic textual information.

Inspired by this technological shift, I decided to dedicate my thesis project to developing novel user experiences powered by multi-modal AI.


The end result was a note-taking-style information management application that is more contextually aware of the user’s needs and better at retrieving information.

Team:

Heteng Li, Haesung Park

Duration:

3 Months, Sept - Dec 2023

Tools:

Figma, After Effects

Mentors:

Hugh Dubberly, Yoon Bahk, Kyle Steinfeld, Björn Hartmann, Eric Paulos

See Synth in Action

01
Context Awareness


While applications such as ChatGPT and Claude are capable of answering general factual inquiries and generating comprehensive answers, their responses are often not tailored to the individual user.


This means that the experience of conversing with an LLM falls apart when more personal, contextual questions are asked.

A good example:


“What was the central argument of the art history essay I wrote on Picasso last year?”


As a result, users often need to input large amounts of information to get a desired output, a practice often known as “prompting”.
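
One way to close this gap is to retrieve the user’s own documents before the model answers. Since Synth was prototyped in Figma rather than built, the sketch below is purely illustrative: it assumes embedding-based search over stored notes and an OpenAI-style API, and every name in it (askWithContext, the model choices) is hypothetical.

```typescript
// Hypothetical sketch: answering personal questions by retrieving
// the user's own notes first. Assumes the OpenAI SDK; Synth's real
// stack is unspecified.
import OpenAI from "openai";

const client = new OpenAI();

interface Note {
  title: string;
  body: string;
  embedding: number[]; // precomputed when the note was saved
}

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Embed the question, rank notes by similarity, and answer with the
// top matches injected as context.
async function askWithContext(question: string, notes: Note[]): Promise<string> {
  const q = await client.embeddings.create({
    model: "text-embedding-3-small",
    input: question,
  });
  const qVec = q.data[0].embedding;

  const context = notes
    .map((n) => ({ n, score: cosine(qVec, n.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3) // keep the three most relevant notes
    .map(({ n }) => `## ${n.title}\n${n.body}`)
    .join("\n\n");

  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: `Answer using the user's notes below:\n\n${context}` },
      { role: "user", content: question },
    ],
  });
  return res.choices[0].message.content ?? "";
}
```

With something like this, the Picasso question above would pull last year’s essay into the prompt automatically, instead of requiring the user to paste it in.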


Contextual Editing


As an editor, Synth can understand what you are doing at any moment, helping you become a better, more efficient writer.


02
Integration


In user interviews with new adopters of Large Language Models, many cited the lack of continuity with their existing workflows as a hindrance to using AI applications more often.

Instead of being embedded as features within existing experiences, new LLM applications are designed as standalone web interfaces, which creates a gap in the user flow.

This made it a clear imperative to design Synth as an all-in-one integrated application, where a language model resides as a co-editor and functions seamlessly alongside the rest of the workspace.


Co-Editing


Synth is, first and foremost, a co-editor. By integrating an LLM into your text editor, Synth can contextually understand text prompts and generate relevant outputs.
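
As a rough illustration of this co-editing loop, the sketch below sends the whole document as context but asks the model to rewrite only the selected span. It assumes an OpenAI-style chat API; editSelection and the prompt wording are hypothetical, not Synth’s actual implementation.

```typescript
// Hypothetical sketch of an inline co-editing command: the model sees
// the full document for context but returns only the rewritten selection.
import OpenAI from "openai";

const client = new OpenAI();

async function editSelection(
  document: string,
  selection: string,
  instruction: string, // e.g. "make this paragraph more concise"
): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "You are a co-editor. Use the document for context, but reply " +
          "with only the rewritten selection and no commentary.",
      },
      {
        role: "user",
        content:
          `Document:\n${document}\n\n` +
          `Selection:\n${selection}\n\n` +
          `Instruction: ${instruction}`,
      },
    ],
  });
  return res.choices[0].message.content ?? selection;
}
```

Because the model always receives the surrounding document, even a vague instruction like “match the tone of my intro” can produce a relevant edit.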


03
Text Heavy Interactions


When analyzing language model interfaces, we discovered that most interactions with LLMs rely heavily on textual input.


While this is beneficial for early adoption, given its flexibility across a wide range of use cases, text-based interfaces often require large amounts of user input before reaching a desired outcome.


Many of these additional interactions could be eliminated with thoughtfully designed user experiences.


Visual Reasoning


Inspired by Maggie Appleton’s Language Model Sketchbook, one of the ways Synth breaks away from text-heavy interfaces is its implementation of new interaction modalities with language models. Instead of prompting, visual reasoning suggests possible logical relationships between ideas and generates a tree-like structure to visualize concepts.
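
As a sketch of how such a feature might be wired up, the code below asks the model to organize a set of ideas into a JSON tree that a canvas could then render as nodes and edges. It assumes an OpenAI-style API with JSON output; ConceptNode and suggestConceptTree are hypothetical names, not Synth’s actual implementation.

```typescript
// Hypothetical sketch: asking the model for a tree of logical
// relationships between ideas, parsed from JSON for rendering.
import OpenAI from "openai";

const client = new OpenAI();

interface ConceptNode {
  label: string;
  children: ConceptNode[];
}

async function suggestConceptTree(ideas: string[]): Promise<ConceptNode> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    response_format: { type: "json_object" }, // force parseable output
    messages: [
      {
        role: "system",
        content:
          "Organize the given ideas into a tree of logical relationships. " +
          'Reply with JSON of the form {"label": string, "children": [...]}, ' +
          "nested recursively.",
      },
      { role: "user", content: ideas.join("\n") },
    ],
  });
  const raw = res.choices[0].message.content ?? '{"label":"","children":[]}';
  return JSON.parse(raw) as ConceptNode;
}
```

Structuring the response as a tree rather than prose is what lets the interface draw relationships visually instead of returning yet another wall of text.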


Takeaways


Reflecting on my journey with Synth, I realized the pressing need to transcend traditional text-based interactions. By integrating experiences such as visual reasoning and other non-textual modalities, we can create more intuitive and efficient interfaces that streamline interactions with LLMs. This project underscored the importance of developing novel, multimodal user experiences for the generative AI future.

See Synth in Action