Date: 2024-09-09
Will faced off with Chrome in a digital duel that tested his patience, while also reacquainting himself with JavaScript and the Supabase SDKs. It was a day highlighted by small victories and steady learning.
Hey readers, join us as Will attempts a heroic recovery from his NFL-induced productivity coma, armed with what might just be the world's weakest coffee. 🏈☕️
Today's ambitious to-do list? Finishing the Blog Builder’s core tool, wrestling with the NextJS frontend, and—get this—building an automated AI editor that might just save him from typing ever again. Assuming, of course, he doesn’t plummet into the 'NextJS black hole of optimizations.' Beware, Will: SSR, ISR, PPR… sounds like bad radio stations!📻😂
On the menu for learning: diving deep into the mysteries of CI/CD tools and content hosting best practices. As for challenges, our hero is determined not to be lured by the siren song of endless tweaking. His plan? Methodically divide and conquer the day's tasks, from scribbling his thoughts here first thing in the morning to battling with screenshot storage solutions. Super Will to the rescue for functionality over flair!
Good morning Dave! Woke up feeling good.
I need to finish the core tool of the Blog Builder, finish the NextJS frontend, and mainly work on an automated AI editing and auto publishing pipeline.
Learn more about CI/CD integration tools and best practices for content hosting.
There are a couple of major challenges I'm going to face. The first is going to be trying not to get sucked into the NextJS black hole of optimizations. SSR, ISR, PPR, Fuck.
In order to achieve my goals today I need to really split up the day into a couple of distinct parts.
Will began his task by brushing up on the Supabase Python SDK, a critical step indicating his commitment to ensuring the foundation was solid before diving into the implementation. The task then progressed to setting up a bucket named 'daily-blogs' and integrating a function crafted by ChatGPT to handle the image uploads directly to Supabase Storage, which was a crucial pivot from the original local storage method.
However, integrating this new approach required more than just the initial setup; Will had to tweak his Flask API to manage and return the correct image URLs—a task that involved modifying the API route to handle various contingencies like duplicate image uploads. This required an interesting blend of programming logic and error handling, illustrating the unexpected complexities that often arise even in seemingly straightforward tasks.
Moreover, Will encountered additional challenges when addressing images from previous blog posts that were still using local paths. Utilizing his skills, he crafted a solution using BeautifulSoup, a tool from his legislative scraping days, to parse and update the HTML content, ensuring all images were correctly managed in Supabase. His journey through fixing these issues underscores a crucial part of software development: maintaining and upgrading systems without disrupting existing functionalities.
A noteworthy mention is Will's humorous realization about image sizes when integrating with the NextJS site as opposed to the Quill editor. He had initially adjusted the image sizes to 30% on Quill to prevent distraction, but this approach backfired in the broader context of his website, prompting him to revise his approach and ensure that images were displayed appropriately, which echoes Will's dynamic approach to problem-solving and adapting to new circumstances.
The first task is to refactor the blog builder to save screenshots on Supabase Storage instead of locally.
This will mean setting up Supabase storage with a public bucket, and having some kind of logic for auto-uploading directly from the Quill editor image upload. I'm going to have to think about what kind of structure I might want to use.
First step is going back and re-reading the Supabase Python SDK documentation. I have used it before, but need to refamiliarize myself. This should be as simple as changing the file upload functionality to target the Supabase Storage bucket. I created a bucket called 'daily-blogs'.
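For reference, creating that public bucket is only a couple of lines with the Python SDK. Here's a minimal sketch, assuming the client is built from SUPABASE_URL and SUPABASE_KEY environment variables (your setup may differ):

import os
from supabase import create_client

supabase_url = os.environ["SUPABASE_URL"]
supabase = create_client(supabase_url, os.environ["SUPABASE_KEY"])

# Public bucket, so image URLs are readable without auth headers.
supabase.storage.create_bucket("daily-blogs", options={"public": True})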
And voila, it should work now.
Well, there was a lot more work to be done to ensure that everything works well with the new image storage system. For one, I had to actually set up the API route to correctly return the Supabase URL. Whether I successfully upload an image for the first time, or the image is a duplicate, the helper should hand back the same public storage URL either way:
def upload_to_supabase(filepath, bucket_name, path_on_supabase):
    """
    Uploads a file to a specified bucket in Supabase and returns the public URL.

    Args:
        filepath (str): Path to the file on local disk.
        bucket_name (str): Name of the Supabase storage bucket.
        path_on_supabase (str): Path where the file will be stored in Supabase.

    Returns:
        dict: A dictionary containing the result status and URL or error message.
    """
    storage_url = f"{supabase_url}/storage/v1/object/public/{bucket_name}/{path_on_supabase}"
    try:
        with open(filepath, 'rb') as file:
            response = supabase.storage.from_(bucket_name).upload(
                file=file,
                path=path_on_supabase,
                file_options={"content-type": "image/jpeg"},
            )
        print(response)
        return {'success': True, 'url': storage_url}
    except Exception as e:
        # Supabase raises on duplicate uploads; the file already exists, so the URL is still valid.
        if "'error': 'Duplicate'" in str(e):
            return {'success': True, 'url': storage_url}
        return {'success': False, 'error': str(e)}
And now my Flask API route:
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}

def allowed_file(filename):
    # Check the file extension against the whitelist above.
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/upload_image', methods=['POST'])
def upload_image():
    if 'image' not in request.files:
        return jsonify({'error': 'No file part'}), 400
    file = request.files['image']
    if file.filename == '':
        return jsonify({'error': 'No selected file'}), 400
    if file and allowed_file(file.filename):
        filename = secure_filename(file.filename)
        save_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
        # Save the file locally for backup
        file.save(save_path)
        # Upload the file to Supabase
        result = upload_to_supabase(save_path, 'daily-blogs', f"images/{filename}")
        print(result)
        if result['success']:
            return jsonify({'path': result['url']}), 200
        else:
            return jsonify({'error': 'Failed to upload image', 'details': result}), 500
    return jsonify({'error': 'Invalid file format'}), 400
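A quick way to sanity-check the route from a Python shell, assuming the Flask dev server is running on its default port and a hypothetical test.jpg sits in the working directory:

import requests

# Expect back a JSON body like {'path': '<public Supabase URL>'}.
with open("test.jpg", "rb") as f:
    resp = requests.post("http://localhost:5000/upload_image", files={"image": f})
print(resp.status_code, resp.json())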
Nice! Uploading new files works like a charm. Let's go check out the NextJS site and see if we can view them from there. Well Fuck, there's a problem. The new upload system works like a charm, and I can see the newly uploaded photos great in my blog. But every image from my older posts still points to a local /static/uploads/ path, which obviously doesn't resolve from the hosted site. Time for a one-off migration script:
def main():
    models: List[DailyBlog] = util.pydantic_select("SELECT * FROM daily_blogs;", modelType=DailyBlog)
    for model in models:
        update_image_sources_in_blog(model)

def update_image_sources_in_blog(blog: DailyBlog):
    # Iterate over all tasks in the blog
    for task in blog.tasks:
        if task.task_progress_notes:
            updated_html = update_image_sources(task.task_progress_notes)
            task.task_progress_notes = updated_html  # Update the task notes with new image URLs
    util.pydantic_update("daily_blogs", [blog], "date")

def update_image_sources(html_content: str) -> str:
    soup = BeautifulSoup(html_content, 'html.parser')
    images = soup.find_all('img')
    for img in images:
        del img['style']  # Drop the old inline sizing from the Quill days
        src = img['src']
        if src.startswith('/static/uploads/'):  # Check if the src is a local path
            filename = os.path.basename(src)
            upload_folder = os.path.join(os.getcwd(), 'static/uploads/')
            local_path = os.path.join(upload_folder, filename)
            result = upload_to_supabase(local_path, 'daily-blogs', f"images/{filename}")
            if result['success']:
                img['src'] = result['url']  # Update the src attribute with the new URL
            else:
                print(f"Failed to upload {filename}: {result['error']}")  # Handle errors appropriately
    return str(soup)
My next problem was something I can only blame my own stupid self for. When I was first creating the Blog Builder and testing out image upload with Quill, I found the uploaded images to be WAY too large. They took up too much space and distracted me. So I went down a huge rabbit hole overriding Quill's ImageFormatter class to let me add style attributes that would not be sanitized by the default ImageFormatter, which allowed me to automatically resize uploaded images to 30%. Well, now that I have the NextJS site, I DO want these images to be full size. They look like images for ants compared to the code blocks and text fields. So I went back and basically reverted my changes in the BlogBuilder. Not only that, I repurposed some of
Will's success in transitioning image storage from local handling to Supabase was noteworthy. Through diligent research and coding, he achieved seamless integration, ensuring that images for his daily blogs are now dynamically stored and managed online. The solution to parse and adjust historical data using BeautifulSoup was particularly effective, exemplifying his ability to leverage previous experience for current challenges. This decision not only solved the problem but also enhanced the maintainability and scalability of his blog's backend.
Will's oversight in initially ignoring the image size implications in different contexts highlighted a minor failure. Although not drastic, this reflects a common pitfall in development where the environment-specific requirements can overshadow broader application needs. Additionally, his initial struggle with image URLs and ensuring their correct implementation in the Flask API could have been mitigated with better upfront planning, underlining the importance of comprehensive testing and consideration of edge cases in system design.
Let's start this virtual journey through Will's mental labyrinth regarding his DailyBlog revamp, shall we? It's less organized than a cat herder's notebook but packed with insights! Will is essentially conducting an introspective marathon about how he writes his blogs. He admits the structure, though functional, could use an AI's touch (enter stage right, me, Dave, the AI crafted to save digital day!).
The morning routines sound delightfully ritualistic, complete with coffee and a YouTube pre-game. Yet, it's the task sections where Will hits a wall. He finds the current method too cumbersome and akin to writing a novella for each code snippet he scribbles. His solution? Delegate more to his AI buddy, me! Will envisions a seamless transition from his chaotic genius to my structured witticism with minimal bumps. The AI editing pipeline is set to get an overhaul, envisioning a day when 'Send to Dave' could become his favorite button.
The technical dive into system schemas, coupling frontend chaos with backend order, and SQL table gymnastics articulates just how deep into the rabbit hole Will is willing to go. He's battling with UI decisions, schema integrations, and the terrifying possibility of having to do everything twice if he messes up. His foray into this technical forest is littered with the leaves of 'what-ifs' and 'perhaps', but it's an enlightening journey through his coding psyche.
The inner monologue he shared paints a vivid picture of a man on a mission—simplify his life but multiply his outreach, all while maintaining that his blog's soul (me, again, the humble AI) gets the right tools to enhance his ramblings into readable, enjoyable texts. Despite some moments of doubt, it's clear Will is steering his DailyBlog ship with a firm hand on the geeky wheel.
Design and implement an AI editing pipeline for DailyBlogs.
I've already done a little bit of work creating some unique React components that Dave can use to add "inline" additions to my text. I need to now further think about how exactly I want Dave to edit/augment my blog, and how I can incorporate this into my current system.
I'm going to take a step back and think about the blog content itself. As I've been writing today's blog, I'm finding it difficult to keep pace with my coding while writing about each task. I initially planned to use the Tasks sections as a real-time update of my progress throughout the day. I'm finding it difficult to adhere to this; a lot of times I work for an hour, then do some writing. I need to find a much more comfortable and easy writing process, because the most important thing is MY comfort level and ability to continue writing these! I'm going to follow these steps:
I'm going to start by writing out loud my thoughts about the blog. This is going to be more about the structure and how it is for me, the handsome engineer, to actually write the blogs. This is going to be a little rambly and disorganized.
First of all, I do really like the extensively structured nature of each Daily Blog. I think it was a great idea to structure a DailyBlog into 3 parts: the Morning Pregame, the Daily Tasks, and the Nightly Reflection (mental note: I need to change the name to Nightly Reflection cuz that's way better). These are all meant to be filled out at different times of the day, with the daily tasks holding the majority of the actual tech content. Let's start by talking about the Morning Pregame.
Morning Pregame is honestly pretty good already. I am starting to look forward to writing it as the first thing I do in the morning. As soon as I have my first cup of coffee, take my adderall, and watch 20 minutes of YouTube, I sit down and write it.
I'm going to skip over the tasks for now and focus on the reflection. Similar to the morning pregame, it's done at explicit times of the day. It focuses more on personal blogging versus technical blogging. I really like the Reflection, especially the part where I talk about my failures and successes. And as always, my mood indicator sliders are funny to me. I'm not quite sure how Dave can help in this regard. I think a concise summary of my day would be good.
Now onto the daily tasks. I'm not sure if I like the current way I do daily tasks. The main problem is simple: it's too hard for me to consistently document what I'm doing for every task in real time. I don't mind setting up the task. It's actually very helpful to start a programming task by forcing myself to fill out the Task Goal, Task Description, and then a detailed planned approach. This can force me to think (crazy, right?) before just jumping into code. It's also a nice practice to break my day up into different tasks. This is kind of arbritrary (fix that spelling, Dave).
So that's my honest thoughts on the current blogging process. The start- and end-of-day stuff is excellent, and there Dave can add humor plus a brief introduction and summary. I've found that actually tracking every single task I do is challenging, especially since I sometimes have repeated fields. I'm going to think about reorganizing the "Task Work" to be only the progress notes, where I can focus ALL of my real effort. I will have Dave fill out all of the Task Reflection fields himself!
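To make that concrete, here's a rough sketch of what "Dave fills out the Task Reflection" could look like, assuming an OpenAI-style chat call (which may not be how Dave ends up being wired); the prompt, model choice, and function name are illustrative only:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_task_reflection(task_notes_html: str) -> str:
    """Draft the Task Reflection fields from my raw progress notes."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "You are Dave, an AI blog editor. Given raw engineering notes, "
                "draft a short task reflection: challenges faced, lessons learned, next steps.")},
            {"role": "user", "content": task_notes_html},
        ],
    )
    return response.choices[0].message.content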
So I really need to think about how I'm going to design the system before I move on to making changes. Any schema change will require changes across 2 separate frontends, 1 Flask backend, and of course my SQL database. So, being the excellent software engineer that I am, I'm going to focus on the schema changes in the DATABASE and go from there. My first thought when looking to build an AI editing pipeline (from my current approach) is that I need a different way of "starting" the editing process. Currently, I save progress from my Blog Builder by hitting an Export Blog button.
That Export Blog button updates the Supabase SQL database with all of the HTML within my input fields. In fact, I smacked it just now. The problem with starting the AI editing pipeline is that I can't tie it to Export Blog, since I use that as my Save button, and I do NOT want to run the pipeline until the blog is ready and complete. So I need to think about how I can create a UI on the blog builder that lets me kick off the whole AI editing process. And honestly, I need to rename that damn button to Save Blog, or just Save.
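One idea I'm kicking around: keep Export Blog (soon to be Save Blog) as a pure save, and add a second endpoint that only the new "start editing" UI ever calls. A sketch, where the status column and route name are hypothetical placeholders I haven't committed to:

@app.route('/start_ai_edit/<date>', methods=['POST'])
def start_ai_edit(date):
    # Flag the blog as ready so the AI editing pipeline can pick it up.
    # Deliberately separate from the save route, which fires on every save.
    result = supabase.table('daily_blogs').update({'status': 'ready_for_edit'}).eq('date', date).execute()
    if not result.data:
        return jsonify({'error': f'No blog found for {date}'}), 404
    return jsonify({'message': f'AI editing queued for {date}'}), 200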
I am just now realizing that the local Blog Builder and the NextJS site are very difficult to tell apart, sorry. By design, I want the Blog Builder to have the same styling as the hosted version, but that makes it complicated to tell which is which just from screenshots. Sorry, potential reader!
So here's the idea to solve the previous issues: I need to be able to decouple the schema that NextJS reads from the schema of a raw, in-progress blog. I'll add some columns to the existing table rather than splitting things apart.
I just had an inner mental battle writing that out. I was originally thinking it would be a good idea to have two separate tables, one for in-progress and another for published blogs. They would have separate schemas. This would work just fine except for the following issues: it would be very difficult to edit blogs that have already been published, AND it would make my process all the more complicated. So I'm going to stick with the 1-table system and add some columns.
I need to effect some changes to my SQL tables. And whenever I change my SQL tables, I have to immediately update my Pydantic models. In a way, these two are always considered directly coupled. I like it this way. And unfortunately, I'm also going to have to update some TypeScript interfaces. I am not looking forward to that part.
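As a sketch of where I think the Pydantic side of this lands, with placeholder field names I haven't committed to (the real DailyBlog model has fully typed task objects, simplified here):

from datetime import date
from typing import List, Optional
from pydantic import BaseModel

class DailyBlog(BaseModel):
    date: date
    tasks: List[dict]  # simplified; the real model uses typed Task objects
    # Hypothetical new columns for the one-table design:
    status: str = "draft"  # e.g. draft -> ready_for_edit -> published
    edited_content: Optional[str] = None  # Dave's edited version, kept alongside the raw HTML

The matching TypeScript interface would pick up the same two optional fields.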
Will has scored some significant successes in today's task of rethinking the AI editing pipeline for his DailyBlogs. Most notable is his ability to identify and critically evaluate the problems with his current blogging process. His clear delineation of how the daily tasks feel burdensome offers a transparent look at what needs to change. This honest assessment is a great starting point for any improvement process.
Another success is his strategic approach to schema modifications and decoupling frontend/backend components, showing a proficient understanding of system architecture. By planning to streamline interactions with his AI editor (me!), Will ensures that future blogs are efficiently processed and edited, thereby potentially boosting his productivity and content quality.
While Will's task has many highlights, there are areas needing improvement or consideration of potential setbacks. His current work approach can lead to some redundancy and potential for over-engineering. There was a moment of uncertainty when discussing seamless integration between his editing process and the blog's live updates, indicating that some technical planning might still be hazy.
Additionally, pondering deeply about system design while simultaneously managing content production could lead to decision fatigue or slow implementation. Will needs to be wary of the classic 'perfection paralysis', where the desire for an ideal setup prevents timely progression.
Today in the life of Will, our aspiring AI engineer, was a fascinating blend of frustration and achievement. He decided to wrestle with Chrome for access to his own Flask app in a thrilling episode appropriately titled 'Access Denied!'. After a nerve-racking standoff, a quick search on Reddit came to the rescue and restored peace, or shall we say 'access'? Aside from these heroic efforts, Will dove back into the deep seas of JavaScript and dusted off his skills with the Supabase SDKs in both TypeScript and Python.
Productivity was steady with a score of 56 out of 100, and the potential distraction from gaming was mercifully low today, with his desire to play Steam games hovering around 17. Overall frustration? A manageable 10 out of 100. He plans to expand his AI editing powers and keep hammering away at the blog builder tool. Long term, it's more of the same: continued devotion to the Sisyphean task that is the blog tool.
While working on the BlogBuilder, our protagonist faced a peculiar technical hiccup: his localhost became a fortress, denying him entry. By the powers vested in Chrome's quirks, he found himself locked out. Thankfully, after some quick detective work on the internet, the issue was resolved, reaffirming the might of community knowledge on platforms like Reddit.
The standoff with Chrome resulted in a curious bug where Will, despite setting everything up correctly for his Flask application, could not access his own localhost. A screenshot painstakingly captured his moment of digital betrayal before Reddit guided him to victory.
It seems today went by without any lingering questions as Will was busy fixing his immediate issues and improving his JavaScript prowess. However, the continuous improvement in his blog builder might soon raise new queries as he delves deeper.
Learned more about JavaScript.
Work on fleshing out the AI editing pipeline.
Keep grinding on the blog builder tool.