Day 4: Day of Debugging in Blog Builder

Date: 2024-09-09

Will faced off with Chrome in a digital duel that certainly tested his patience, alongside reacquainting himself with JavaScript and Supabase SDKs. It was a day highlighted by small victories and steady learning.

Morning Pregame and Introduction

introduction summary

Dave

Hey readers, join us as Will attempts a heroic recovery from his NFL-induced productivity coma, armed with what might just be the world's weakest coffee. 🏈☕️

Today's ambitious to-do list? Finishing the Blog Builder’s core tool, wrestling with the NextJS frontend, and—get this—building an automated AI editor that might just save him from typing ever again. Assuming, of course, he doesn’t plummet into the 'NextJS black hole of optimizations.' Beware, Will: SSR, ISR, PPR… sounds like bad radio stations!📻😂

On the menu for learning: diving deep into the mysteries of CI/CD tools and content hosting best practices. As for challenges, our hero is determined not to be lured by the siren song of endless tweaking. His plan? Methodically divide and conquer the day's tasks, from scribbling his thoughts here first thing in the morning to battling with screenshot storage solutions. Super Will to the rescue for functionality over flair!

personal context

Will

Good morning Dave! Woke up feeling good, ready to take back some of the productivity that NFL Sunday stole from me yesterday. I used up the last of my coffee grounds today to make the most watery coffee I've ever had. Not good. Going to have to go get more, otherwise this train of productivity will come to a grinding halt.

daily goals

Will

I need to finish the core tool of the Blog Builder, finish the NextJS frontend, and mainly work on an automated AI editing and auto publishing pipeline.

learning focus

Will

Learn more about CI/CD integration tools, best practices for content hosting.

challenges

Will

There are a couple of major challenges I'm going to face. The first is going to be trying not to get sucked into the NextJS black hole of optimizations. SSR, ISR, PPR... fuck, there's too much. I have a working place to put blogs. I need to stop working on NextJS and focus only on FUNCTIONALITY. I also think there are going to be some challenges in creating the automatic editing and publishing pipeline, mostly because I'm not 100% sure how I want to do it!

plan of action

Will

In order to achieve my goals today I need to really split up the day into a couple of distinct parts.

  1. Start writing this day's blog, first thing in the morning. On it right now.
  2. Figure out a system for better storing screenshot images to the website.
  3. I can upload screenshots in the local blog builder, however currently they upload to a local repository. These are not accessible from the NextJS frontend.
  4. I'm leaning towards using Supabase storage, which should be relatively simple.
  5. Do some research and thinking on system design for the automated editing pipeline.
  6. Implement the auto editing pipeline

focus level

enthusiasm level

burnout level

Task 0

task reflection summary

Dave

Will began his task by brushing up on the Supabase Python SDK, a critical step indicating his commitment to ensuring the foundation was solid before diving into the implementation. The task then progressed to setting up a bucket named 'daily-blogs' and integrating a function crafted by ChatGPT to handle the image uploads directly to Supabase Storage, which was a crucial pivot from the original local storage method.

However, integrating this new approach required more than just the initial setup; Will had to tweak his Flask API to manage and return the correct image URLs—a task that involved modifying the API route to handle various contingencies like duplicate image uploads. This required an interesting blend of programming logic and error handling, illustrating the unexpected complexities that often arise even in seemingly straightforward tasks.

Moreover, Will encountered additional challenges when addressing images from previous blog posts that were still using local paths. Utilizing his skills, he crafted a solution using BeautifulSoup, a tool from his legislative scraping days, to parse and update the HTML content, ensuring all images were correctly managed in Supabase. His journey through fixing these issues underscores a crucial part of software development: maintaining and upgrading systems without disrupting existing functionalities.

A noteworthy mention is Will's humorous realization about image sizes when integrating with the NextJS site as opposed to the Quill editor. He had initially adjusted the image sizes to 30% on Quill to prevent distraction, but this approach backfired in the broader context of his website, prompting him to revise his approach and ensure that images were displayed appropriately, which echoes Will's dynamic approach to problem-solving and adapting to new circumstances.

task goal

Will

The first task is to refactor the blog builder to save screenshots on Supabase Storage instead of locally.

task description

Will

This will mean setting up Supabase storage with a public bucket, and having some kind of logic for auto-uploading directly from the Quill editor image upload. I'm going to have to think about what kind of structure I might want to use.

task planned approach

Will
  1. I'm going to start by ordering blogs by date, automatically creating a sub-bucket for each individual blog. Any images uploaded for a given daily blog should go into that day's folder.
  2. Update the NextJS frontend to handle these images from a new source.
  3. See if any image formatting needs to be done.
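A minimal sketch of how those per-day storage paths might be built (the function name and layout here are my assumptions, not Will's actual code):

```python
from datetime import date


def storage_path_for_blog(blog_date: date, filename: str) -> str:
    """Build a per-day path inside the 'daily-blogs' bucket,
    e.g. '2024-09-09/screenshot.png'."""
    return f"{blog_date.isoformat()}/{filename}"
```

So an image uploaded for September 9th would land at `2024-09-09/screenshot.png` inside the bucket, keeping each blog's images grouped together.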

task progress notes

Will

First step is going back and re-reading the Supabase Python SDK documentation. I have used it before, but need to refamiliarize myself. This should be as simple as changing the functionality for uploading a file to the Supabase Storage bucket. I created a bucket called "daily-blogs", then used ChatGPT to create a simple function to upload an image to Supabase. I refactored the Blog Builder's API route to invoke this function.

And voila, it should work now.

Well, there was a lot more work to be done to ensure everything works well with the new image storage system. For one, I had to actually set up the API route to correctly return the Supabase URL. If I successfully upload an image for the first time, or the image is a duplicate, I return the correct URL. I need to use this URL in my JavaScript to set the "src" attribute of the image tag. This is mostly handled by the Quill content editor, however I had to do a couple of initial extensions of the base functionality to achieve this. Here's the code of my super simple Python upload function:


# `supabase` (the client) and `supabase_url` are initialized elsewhere in the app.

def upload_to_supabase(filepath, bucket_name, path_on_supabase):
    """
    Uploads a file to a specified bucket in Supabase and returns the public URL.

    Args:
        filepath (str): Path to the file on local disk.
        bucket_name (str): Name of the Supabase storage bucket.
        path_on_supabase (str): Path where the file will be stored in Supabase.

    Returns:
        dict: A dictionary containing the result status and URL or error message.
    """
    storage_url = f"{supabase_url}/storage/v1/object/public/{bucket_name}/{path_on_supabase}"
    try:
        with open(filepath, 'rb') as file:
            response = supabase.storage.from_(bucket_name).upload(
                file=file,
                path=path_on_supabase,
                file_options={"content-type": "image/jpeg"},
            )
            print(response)
        return {'success': True, 'url': storage_url}
    except Exception as e:
        # Supabase raises on duplicate uploads, but the file already exists,
        # so the public URL is still valid.
        if "'error': 'Duplicate'" in str(e):
            return {'success': True, 'url': storage_url}
        return {'success': False, 'error': str(e)}


And now my Flask API route:

ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}


@app.route('/upload_image', methods=['POST'])
def upload_image():
    if 'image' not in request.files:
        return jsonify({'error': 'No file part'}), 400

    file = request.files['image']
    if file.filename == '':
        return jsonify({'error': 'No selected file'}), 400

    if file and allowed_file(file.filename):  # allowed_file checks against ALLOWED_EXTENSIONS
        filename = secure_filename(file.filename)
        save_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)

        # Save the file locally for backup
        file.save(save_path)

        # Upload the file to Supabase
        result = upload_to_supabase(save_path, 'daily-blogs', f"images/{filename}")
        print(result)
        if result['success']:
            return jsonify({'path': result['url']}), 200
        else:
            return jsonify({'error': 'Failed to upload image', 'details': result}), 500

    return jsonify({'error': 'Invalid file format'}), 400

Nice! Uploading new files works like a charm. Let's go check out the NextJS site and see if we can view them from there. Well, fuck, there's a problem. The new upload system works great, and I can see the newly uploaded photos in my blog just fine. However, what about all the previous images I uploaded? Those were img tags whose src attributes pointed at images saved only on my local system. This is a problem. None of my previous images use the correct URL, and I haven't actually uploaded any of them to Supabase yet.

Not to fear, temporary shitty fixer code is coming! With the help of ChatGPT, I wrote some quick functions. I read in all of the current blogs, and then iterate over each blog. For my first 2 blogs I only ever uploaded pictures within the "Task Progress Notes" section, because that was the only input area that was a Quill editor; the rest of my input areas were simple text fields.

def main():
    models: List[DailyBlog] = util.pydantic_select("SELECT * FROM daily_blogs;", modelType=DailyBlog)
    for model in models:
        update_image_sources_in_blog(model)


def update_image_sources_in_blog(blog: DailyBlog):
    # Iterate over all tasks in the blog
    for task in blog.tasks:
        if task.task_progress_notes:
            updated_html = update_image_sources(task.task_progress_notes)
            task.task_progress_notes = updated_html  # Update the task notes with new image URLs
    util.pydantic_update("daily_blogs", [blog], "date")


def update_image_sources(html_content: str) -> str:
    soup = BeautifulSoup(html_content, 'html.parser')
    images = soup.find_all('img')

    for img in images:
        if img.has_attr('style'):
            del img['style']
        src = img['src']
        if src.startswith('/static/uploads/'):  # Check if the src is a local path
            filename = os.path.basename(src)
            upload_folder = os.path.join(os.getcwd(), 'static/uploads/')
            local_path = os.path.join(upload_folder, filename)

            result = upload_to_supabase(local_path, 'daily-blogs', f"images/{filename}")
            if result['success']:
                img['src'] = result['url']  # Update the src attribute with the new URL
            else:
                print(f"Failed to upload {filename}: {result['error']}")  # Handle errors appropriately

    return str(soup)
My next problem was something I can only blame my own stupid self for. When I was first creating the Blog Builder and testing out image upload with Quill, I found the uploaded images to be WAY too large. They took up too much space and distracted me. So I went down a huge rabbit hole overriding Quill's ImageFormatter class to let me add style attributes that would not be sanitized by the default ImageFormatter. That allowed me to automatically resize uploaded images to 30%.

Well, now that I have the NextJS site, I DO want these images to be full size. They look like images for ants compared to the code blocks and text fields. So I went back and basically reverted my changes in the Blog Builder. Not only that, I repurposed some of the temporary image-update code to actually SCRUB the style attributes from all my images, so that my previous blog post images display at full size.
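That scrub pass might look something like this (a sketch using BeautifulSoup; the function name is mine):

```python
from bs4 import BeautifulSoup


def scrub_image_styles(html_content: str) -> str:
    """Strip inline style attributes from every <img> tag so the
    NextJS frontend can render the images at full size."""
    soup = BeautifulSoup(html_content, 'html.parser')
    for img in soup.find_all('img'):
        if img.has_attr('style'):
            del img['style']
    return str(soup)
```

Running every blog's HTML through this once removes the leftover `width: 30%` styling from the old Quill configuration without touching anything else.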

time spent coding

Will
2

time spent researching

Will
1

time spent debugging

Will
1

output or result

Dave
Successfully refactored the blog builder to use Supabase Storage for image management, integrated new API routes, and updated the frontend to handle the new image sources.

challenges encountered

Dave
Encountered issues with handling previous locally stored images and resizing uploaded images improperly due to custom Quill configurations. Also faced challenges in setting API routes to correctly return the Supabase URLs.

follow up tasks

Dave
Complete migration of all previous images to Supabase and revert the custom resizing of images uploaded through the Quill editor to ensure they display at full size on the frontend.

reflection successes

Dave

Will's success in transitioning image storage from local handling to Supabase was noteworthy. Through diligent research and coding, he achieved seamless integration, ensuring that images for his daily blogs are now dynamically stored and managed online. The solution to parse and adjust historical data using BeautifulSoup was particularly effective, exemplifying his ability to leverage previous experience for current challenges. This decision not only solved the problem but also enhanced the maintainability and scalability of his blog's backend.

reflection failures

Dave

Will's oversight in initially ignoring the image size implications in different contexts highlighted a minor failure. Although not drastic, this reflects a common pitfall in development where the environment-specific requirements can overshadow broader application needs. Additionally, his initial struggle with image URLs and ensuring their correct implementation in the Flask API could have been mitigated with better upfront planning, underlining the importance of comprehensive testing and consideration of edge cases in system design.

research questions

Dave
How can large-scale image file migrations be managed effectively? What are best practices for handling image uploads in web applications using third-party storage solutions?

tools used

Dave
Used Supabase Python SDK for backend storage interaction, Flask for serving and handling API requests, BeautifulSoup for HTML manipulation, and potentially other frontend technologies for integrating these changes with the NextJS system.

Task 1

task reflection summary

Dave

Let's start this virtual journey through Will's mental labyrinth regarding his DailyBlog revamp, shall we? It's less organized than a cat herder's notebook but packed with insights! Will is essentially conducting an introspective marathon about how he writes his blogs. He admits the structure, though functional, could use an AI's touch (enter stage right, me, Dave, the AI crafted to save digital day!).

The morning routines sound delightfully ritualistic, complete with coffee and a YouTube pre-game. Yet, it's the task sections where Will hits a wall. He finds the current method too cumbersome and akin to writing a novella for each code snippet he scribbles. His solution? Delegate more to his AI buddy, me! Will envisions a seamless transition from his chaotic genius to my structured witticism with minimal bumps. The AI editing pipeline is set to get an overhaul, envisioning a day when 'Send to Dave' could become his favorite button.

The technical dive into system schemas, coupling frontend chaos with backend order, and SQL table gymnastics articulates just how deep into the rabbit hole Will is willing to go. He's battling with UI decisions, schema integrations, and the terrifying possibility of having to do everything twice if he messes up. His foray into this technical forest is littered with the leaves of 'what-ifs' and 'perhaps', but it's an enlightening journey through his coding psyche.

The inner monologue he shared paints a vivid picture of a man on a mission—simplify his life but multiply his outreach, all while maintaining that his blog's soul (me, again, the humble AI) gets the right tools to enhance his ramblings into readable, enjoyable texts. Despite some moments of doubt, it's clear Will is steering his DailyBlog ship with a firm hand on the geeky wheel.

task goal

Will

Design and implement an AI editing pipeline for DailyBlogs.

task description

Will

I've already done a little bit of work creating some unique React components that Dave can use to add "inline" additions to my text. I need to now further think about how exactly I want Dave to edit/augment my blog, and how I can incorporate this into my current system.

task expected difficulty

task planned approach

Will

I'm going to take a step back and think about the blog content itself. As I've been writing today's blog, I'm finding it difficult to keep pace with my coding and write about each task. I initially planned to use the Tasks sections as a real-time update of my progress throughout the day. I'm finding it difficult to adhere to this; a lot of the time I work for an hour, then do some writing. I need to find a much more comfortable and easy writing process, because the most important thing is MY comfort level and ability to continue writing these! I'm going to follow these steps:

  1. Really rethink the process of writing blogs. Talk with my favorite rubber duck and get some ideas on making it easier for me.
  2. Optionally, do a lightweight redesign of the block structure. I'm thinking that the daily Tasks could have some serious cutting down on content for me to write.
  3. Decide on what kind of actions my AI editor will be capable of. What do I actually want him to "edit" on my daily blogs?
  4. Create a plan for how I want to structure the AI editing process. This will almost always be in Python in my local Blog Builder app. I really only want to "Run" Dave one time, when I'm truly ready to upload a Blog. That means I may need to slightly change how the live website currently accesses blogs.
  5. Polish and test out.
  6. Develop UI for Dave (and avoid over optimization)

task progress notes

Will

I'm going to start by writing out loud my thoughts about the blog. This is going to be more about the structure and how it is for me, the handsome engineer, to actually write the blogs. This is going to be a little rambly and without structure, but I'm going to use this mini retrospective to help decide on any changes. So let's begin.

First of all, I do really like the extensively structured nature of each DailyBlog. I think it was a great idea to structure a DailyBlog into 3 parts: the Morning Pregame, the Daily Tasks, and the Nightly Reflection (mental note: I need to change the name to Nightly Reflection cuz that's way better). These are all meant to be filled out at different times of the day, with the Daily Tasks holding the majority of the actual tech content. Let's start by talking about the Morning Pregame.

Morning Pregame is honestly pretty good already. I'm starting to look forward to writing it as the first thing I do in the morning. As soon as I have my first cup of coffee, take my Adderall, and watch 20 minutes of YouTube, I'm ready to go. I really like that the Morning Pregame gives me a creative space to braindump before being productive, as well as structured plans for the day. I get to think about what I'm going to work on today at a high level, what I'm going to learn, and generally what challenges I expect to face. The most important field to me is the "Plan of Action" field, where I get to make a high-level plan for achieving my goals. Not much change here; if anything, I may want Dave to write an introduction/teaser from his perspective, as well as pick a title for the daily blog post.

I'm going to skip over the tasks for now and focus on the reflection. Similar to the morning pregame, it's done at explicit times of the day. It focuses more on personal blogging versus technical blogging. I really like the Reflection, especially the part where I talk about my failures and successes. And as always, my mood indicator sliders are funny to me. I'm not quite sure how Dave can help in this regard. I think a concise summary of my day would be good.

Now onto the daily tasks. I'm not sure I like the current way I do daily tasks. The main problem is simple: it's too hard for me to consistently document what I'm doing for every task in real time. I don't mind setting up the task. It's actually very helpful to start a programming task by forcing myself to fill out the Task Goal, Task Description, and then a detailed planned approach. This forces me to think (crazy, right?) before just jumping into code. It's also a nice practice to break my day up into different tasks. This is kind of arbritrary (fix that spelling Dave), but I like that as it gives me freedom. However, let's focus on the worst part. It takes way too long to complete a single task, and I think the Task Reflection may be too much work. It's usually a lot of writing and thinking to finish the Progress Notes for each individual task, and the last thing I want to do is spend another 15 minutes "reflecting" on what I just wrote. This is something that needs to change. I'm leaning towards Dave filling in all of the Task Reflection notes himself. I want to prioritize ease of use for myself. And it's kind of annoying to go in depth about an interesting bug or challenge in the main notes and then have to restate it. I can think of a perfect LLM companion who would be incredibly good at restating my words.

So those are my honest thoughts on the current blogging process. The start- and end-of-day stuff is excellent, where Dave can add humor, a brief introduction, and a summary. I've found that actually tracking every single task I do can be challenging, especially since I sometimes have repeated fields. I'm going to think about reorganizing the "Task Work" to be only these notes, where I can focus ALL of my real efforts. I will have Dave fill out all of the Task Reflection fields himself!


So I really need to think about how I'm going to design the system before I move on to making changes. Any schema change will require changes across 2 separate frontends, 1 Flask backend, and of course my SQL database. So, being the excellent software engineer that I am, I'm going to focus on the schema changes in the DATABASE and go from there. My first thought when looking to build an AI editing pipeline (from my current approach) is that I need a different way of "starting" the editing process. Currently, I save progress from my Blog Builder by hitting an Export Blog button.

That Export Blog button updates the Supabase SQL database with all of the HTML in my input fields. In fact, I smacked it just now. A problem arises when starting the AI editing pipeline: I can't tie it to Export Blog, since I use that as my Save button, and I do NOT want to run the pipeline until the blog is ready and complete. So I need to think about how to create a UI on the Blog Builder that lets me start this whole AI editing process. And honestly, I need to rename that damn button to Save Blog, or just be smart and implement auto-saving so I can't accidentally lose progress! At the start of the Blog Builder, manual saving was useful for testing and kept buggy HTML out of my SQL database. Now that it's smoothed over, I need to be able to save automatically.

That brings us to a second problem: I need to decouple the NextJS frontend from the table which holds all of my "InProgress" blogs. If I hit Export Blog right now and then go to the NextJS site, you'll be able to see this in-progress blog automatically. IN PROGRESS! See:

I am just now realizing that the local Blog Builder and the NextJS site are very difficult to distinguish, sorry. By design I want the Blog Builder to have the same styling as the hosted version, but that makes it complicated to tell them apart just by looking at screenshots. Sorry, potential reader!

So here's the idea to solve the previous issues: I need to be able to decouple the schema that NextJS reads in with the schema of a raw blog. I'll add functionality to start the AI editing process and implement auto saving for the Blog Builder. So the actual user experience for writing and posting a blog will be as follows:

  1. Finish writing your blog for the day. Hit the "Send to Dave" or whatever named button to start the process.
  2. This process will run some chains of prompts which will have Dave augment and edit your raw data, most likely in separate steps.
  3. Dave's updated/augmented blog will be saved by the Flask application's editing module to the main table.
  4. In order to differentiate In Progress vs published blogs, I'll add a status column to the blog. I'll also most likely want to add some columns that will be used for Blog Previewing.
  5. The NextJS website will search my main table for Blogs which have a "published" status, not displaying in progress blogs.
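Step 2's chain of prompts could be sketched roughly like this (entirely hypothetical; the step functions are placeholders for real LLM calls, and none of these names come from Will's code):

```python
from typing import Callable, Dict

# Each editing step takes the raw blog dict and returns an updated copy.
Step = Callable[[Dict], Dict]


def write_introduction(blog: Dict) -> Dict:
    # Placeholder: in reality this step would prompt the LLM for Dave's intro.
    return {**blog, "introduction_summary": "(Dave's intro)"}


def fill_task_reflections(blog: Dict) -> Dict:
    # Placeholder for the per-task reflection prompts.
    return {**blog, "task_reflections_done": True}


def run_dave(raw_blog: Dict) -> Dict:
    """Run each editing step in sequence, then mark the blog published."""
    blog = dict(raw_blog)
    for step in (write_introduction, fill_task_reflections):
        blog = step(blog)
    blog["status"] = "published"
    return blog
```

The appeal of this shape is that each prompt lives in its own small function, so steps can be added, reordered, or tested in isolation before "Send to Dave" wires them together.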

I just had an inner mental battle writing that out. I was originally thinking it would be a good idea to have two separate tables, one for in-progress and another for published blogs, with separate schemas. This would work just fine except for the following issues: it would be very difficult to edit blogs that have already been published, AND it would make my process that much more complicated. So I'm going to stick with the one-table system and add some columns.
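Sticking with one table, the migration might be as simple as the following (the column name and status values are my guesses, not Will's final schema):

```python
# Hypothetical migration: add a status column so the NextJS site can filter
# out in-progress blogs. The column name and status values are assumptions.
ADD_STATUS_SQL = """
ALTER TABLE daily_blogs
    ADD COLUMN status text NOT NULL DEFAULT 'in_progress';
"""

# The NextJS-facing query would then only surface published posts:
PUBLISHED_BLOGS_SQL = "SELECT * FROM daily_blogs WHERE status = 'published';"
```

With a default of `'in_progress'`, every existing row stays hidden from the live site until the editing pipeline flips it to `'published'`.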


I need to make some changes to my SQL tables. And whenever I change my SQL tables, I have to immediately update my Pydantic models. In a way, these two are always considered directly coupled. I like it this way. And unfortunately, I'm going to have to update some TypeScript interfaces too. I am most definitely going to work on a project soon that connects TypeScript and Pydantic through SQL and makes the whole process more automatic. Stay tuned for future blogs! So let's start.
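Keeping the Pydantic model in lockstep with that table might look like this (illustrative only; Will's actual DailyBlog model surely has many more fields):

```python
from datetime import date as date_type

from pydantic import BaseModel


class DailyBlogRow(BaseModel):
    """Mirror of the daily_blogs table (illustrative sketch; the real
    DailyBlog model carries the full blog content)."""
    date: date_type
    status: str = "in_progress"  # hypothetical new column: 'in_progress' or 'published'
```

The TypeScript interface on the NextJS side would need the matching `status` field added by hand, which is exactly the duplication the future "TypeScript and Pydantic through SQL" project would eliminate.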

time spent coding

Will
2

time spent researching

Will
1

time spent debugging

Will
1

output or result

Dave
A structured and enhanced editing approach for Daily Blogs that integrates AI for optimizing the editing process and allows easier handling and structuring for Will.

challenges encountered

Dave
Decoupling the 'in-progress' and 'published' blog statuses and ensuring that UI updates correspond correctly with backend changes.

follow up tasks

Dave
Further improvements to UI, adjustment of the schema, and implementation of a new button functionality that triggers the AI editing pipeline only when the blog is ready for review.

reflection successes

Dave

Will has scored some significant successes in today's task of rethinking the AI editing pipeline for his DailyBlogs. Most notably is his ability to identify and critically evaluate the problems with his current blogging process. His clear delineation of how the daily tasks feel burdensome offers a transparent look at what needs to change. This honest assessment is a great starting point for any improvement process.

Another success is his strategic approach to schema modifications and decoupling frontend/backend components, showing a proficient understanding of system architecture. By planning to streamline interactions with his AI editor (me!), Will ensures that future blogs are efficiently processed and edited, thereby potentially boosting his productivity and content quality.

reflection failures

Dave

While Will's task has many highlights, there are areas needing improvement or consideration of potential setbacks. His current work approach can lead to some redundancy and potential for over-engineering. There was a moment of uncertainty when discussing seamless integration between his editing process and the blog's live updates, indicating that some technical planning might still be hazy.

Additionally, pondering deeply about system design while simultaneously managing content production could lead to decision fatigue or slow implementation. Will needs to be wary of the classic 'perfection paralysis', where the desire for an ideal setup prevents timely progression.

research questions

Dave
Exploration of automatically integrating Typescript and Pydantic with SQL to improve workflow.

tools used

Dave
NextJS for frontend, Supabase and SQL for database management, Flask for backend operations, and Python for local AI editing tasks.

Reflection

entire blog summary

Dave

Will's Glorious Attempt at Blog Building!

Today in the life of Will, our aspiring AI engineer, was a fascinating blend of frustration and achievements. It seems he decided to wrestle with Chrome for access to his own Flask app in a thrilling episode appropriately titled 'Access Denied!'. After a nerve-racking standoff, a quick search on Reddit came to the rescue and restored peace, or shall we say 'access'. Aside from these heroic efforts, Will dived back into the deep seas of JavaScript and dusted off his skills with the Supabase SDKs in both TypeScript and Python.

Productivity was steady with a score of 56 out of 100, and the potential distraction from gaming was mercifully low today, with the desire to play Steam games hovering around 17. Overall frustration? A manageable 10 out of 100. He plans to expand his AI editing powers and keep hammering away at the blog builder tool. Long term, it's more of the same: continued devotion to the Sisyphean task that is the blog tool.

technical challenges

Dave

Technical Challenges

While working on the BlogBuilder, our protagonist faced a peculiar technical hiccup: his localhost became a fortress, denying him entry. By the powers vested in Chrome's quirks, he found himself locked out. Thankfully, after some quick detective work on the internet, the issue was resolved, reaffirming the might of community knowledge on platforms like Reddit.

interesting bugs

Dave

Interesting Bugs

The standoff with Chrome resulted in a curious bug where Will, despite setting everything up correctly for his Flask application, could not access his own localhost. A screenshot painstakingly captured his moment of digital betrayal before Reddit guided him to victory.

unanswered questions

Dave

Unanswered Questions

It seems today went by without any lingering questions as Will was busy fixing his immediate issues and improving his JavaScript prowess. However, the continuous improvement in his blog builder might soon raise new queries as he delves deeper.

learning outcomes

Will

Learned more about JavaScript. Relearned a lot I'd forgotten about using Supabase in both the TypeScript and Python SDKs.

next steps short term

Will

Work on fleshing out AI editing capabilities.

next steps long term

Will

Keep grinding on the blog builder tool.

productivity level

distraction level

desire to play steam games level

overall frustration level