Hello, my name is Vishisht Seku. This is my blog post for CS-443 (Software Quality Assurance & Testing). I am very glad to be a part of this class and to learn more about the process of software quality management.
-
One part of LibreFoodPantry that I found especially interesting was its mission to connect computer science education with real humanitarian impact. Rather than treating software development as an abstract or purely technical exercise, LibreFoodPantry shows how code can directly support local food pantries and the people who rely on them. The idea that free and open-source software can be adapted to meet the needs of different communities stood out to me, because it emphasizes flexibility, accessibility, and long-term usefulness instead of profit.
I chose to write about this mission because it reframes how I think about computing as a field. As a senior at Worcester State University, much of my coursework has focused on efficiency, correctness, and performance. While those skills are important, LibreFoodPantry highlights another dimension of computer science: responsibility to society. By involving students and faculty in instructor-led, open-source projects, the organization creates a learning environment where technical skills are developed alongside empathy and civic awareness. This approach makes the work feel meaningful and shows that software can be a practical tool for social good, not just a career skill or academic requirement.
The one part of Thea’s Pantry that I thoroughly enjoyed reading about was the Developer Documentation section. As an aspiring software and data engineer, I found this documentation to be both a revelation and a textbook. I especially liked the sections on ‘Code of Conduct’ and ‘Inclusive Language’. They encourage us to hold ourselves to a high ethical standard and to be kind and compassionate to our teammates.
I also enjoyed the technical sections, which gave me a good grasp of managing workflows using the Git branch and pull request protocol. Other sections, such as ‘Release Process’ and ‘Pipelines’, provide a solid framework for the intended CI/CD process. Regarding ‘Dependency Management’, I was surprised to learn the differences between ‘dependencies’, ‘devDependencies’, and ‘peerDependencies’; until now, I had assumed all dependencies were managed as a single unit.
I am very glad to be part of this Capstone Project with a noble mission.
-
The recent work assigned to us on REST API services has been very enlightening. I have found that REST APIs also form the heart of microservice architecture: they help software engineering teams break monolithic application code into modular, atomic, and maintainable services. To explore this further, I read a blog post titled “Designing RESTful APIs for Microservices Architecture” (https://blog.xapihub.io/2024/04/17/Designing-RESTful-APIs-for-Microservices-Architecture.html) from XapiHub. This post explains how REST API design plays a critical role in making microservices effective, reliable, and loosely coupled. Since we are learning how clean design and good processes improve software quality, I felt this resource fit directly into our course themes.
The blog begins by explaining that microservices rely heavily on APIs to communicate with each other. REST is the most common choice because it is simple, stateless, and works well across different services and technologies. The author highlights several best practices such as using clear resource-based URLs, handling errors consistently, applying versioning to avoid breaking clients, and building truly independent services that do not secretly depend on each other’s data models. The blog also emphasizes monitoring, documentation, and strong authentication practices, which are especially important when many services are talking to each other across the network.
I selected this resource because I found it clear and approachable, with simple diagrams and easy-to-understand language. I have always found microservices interesting but a little overwhelming. When people talk about dozens or hundreds of services communicating through REST, I sometimes imagine chaos. This blog made the topic feel more understandable by breaking it into practical design rules. Since our class focuses on professional processes, documentation, and maintainability, learning how microservices stay organized through REST guidelines felt very relevant. Also, I know many modern data-engineering and enterprise systems use microservices—exactly the direction I want to grow my career in.
One of the biggest things I learned is how important statelessness is. I always assumed microservices were powerful mainly because of their independence, but I didn’t realize how much stateful communication can ruin scalability. Another takeaway was API versioning. Before this, I didn’t think deeply about how older clients might break when an API changes. The blog’s explanation helped me understand why backward compatibility is a core part of good software process management.
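To make these two takeaways concrete for myself, I wrote a tiny sketch of how versioning and statelessness might look in practice. This is my own illustration, not code from the blog: the `ApiRouter` class, the routes, and the JSON shapes are all hypothetical. The idea is that the version lives in the URI, so a v2 response can change shape without breaking v1 clients, and the handler keeps no state between calls.

```java
import java.util.Map;

// Hypothetical sketch of URI versioning in a stateless handler. All names
// and routes here are invented for illustration; they are not from the blog.
public class ApiRouter {
    // Stateless: everything needed to answer arrives with the request itself;
    // nothing is remembered between calls, so any replica can serve it.
    public static String handle(String method, String path) {
        if (!method.equals("GET")) {
            return "405 Method Not Allowed";
        }
        // v1 returns a flat object; v2 wraps it in a richer structure.
        // Old clients on /v1 keep working after /v2 ships.
        Map<String, String> routes = Map.of(
            "/v1/users/42", "{\"name\":\"Ada Lovelace\"}",
            "/v2/users/42", "{\"user\":{\"id\":42,\"name\":\"Ada Lovelace\"}}"
        );
        return routes.getOrDefault(path, "404 Not Found");
    }

    public static void main(String[] args) {
        System.out.println(handle("GET", "/v1/users/42"));
        System.out.println(handle("GET", "/v2/users/42"));
        System.out.println(handle("POST", "/v1/users/42"));
    }
}
```

Seeing both versions side by side helped me understand why the blog treats backward compatibility as a design concern rather than an afterthought.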
The path ahead to becoming a successful software engineer can feel overwhelming, with so much to comprehend and implement. We have to be good not just at programming languages but also at these principles of effective application engineering. This blog helped me connect industry best practices to what we’ve been learning in class about writing clean, maintainable, and well-designed software systems.
-
I am thoroughly enjoying the classes on writing clean code under the aegis of Professor Al-Faris in the Software Process Management course. While looking for more material outside class, I found a blog post titled “How to Write Clean Java Code: Best Practices” on Digma’s website (https://digma.ai/clean-code-java/). Since we work a lot with Java, especially in our design patterns class (Software Construction Design and Architecture under Professor Wurst) and project classes, I felt this blog would be perfect for learning some practical ideas on writing cleaner code. It directly connects to the course concepts of good processes, readability, and long-term maintainability.
Summary of the Blog Post
The blog highlights several clean-code principles specifically focusing on Java. It talks about using meaningful names, keeping methods small, breaking down complex logic, and making classes more focused on single responsibilities. It also covers organizing projects so that packages and modules make sense. Other topics include avoiding unnecessary comments, choosing good formatting, and using tools like linters and static analysis to keep code quality consistent across a team. The blog also mentions the importance of avoiding side effects and making code more predictable so that debugging becomes easier.
Why I Picked This Resource
I picked this blog mainly because I sometimes struggle with writing code that is easy for other people to read. In group projects, especially in CS-343 and other classes, I’ve noticed that even when my code works, people ask me to rewrite parts to make it more understandable. Since the course emphasizes professional development and communication skills, I wanted to learn more about how to write code that doesn’t confuse future developers—including a future version of myself. Also, I’m working with Java in my Android and backend mini projects, so the resource felt immediately relevant.
What I Learned and My Reflections
One thing that really affected me was the section on method size and single responsibility. I realized I often cram too much logic into one method because it feels “efficient” while writing it. But the blog explains that smaller, focused methods improve readability and make testing easier. Another important point was naming. I always knew naming mattered, but the blog explained it in a more practical way—names should communicate intent, not just function.
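To practice this, I tried rewriting a typical "one big method" as a set of small, intention-revealing methods. The invoice example below is my own, not one of the blog's samples; it just shows the shape of the advice. The top-level method now reads like a description of the business rule, and each step can be tested on its own.

```java
import java.util.List;

// My own sketch of the "small methods, intention-revealing names" advice.
// Before, I would have computed the subtotal, applied the discount, and
// added tax in one method; here each step is named after what it means.
public class InvoiceCalculator {
    public static double totalDue(List<Double> itemPrices, double discountRate, double taxRate) {
        double subtotal = subtotal(itemPrices);
        double discounted = applyDiscount(subtotal, discountRate);
        return addTax(discounted, taxRate);
    }

    private static double subtotal(List<Double> itemPrices) {
        return itemPrices.stream().mapToDouble(Double::doubleValue).sum();
    }

    private static double applyDiscount(double amount, double rate) {
        return amount * (1.0 - rate);
    }

    private static double addTax(double amount, double rate) {
        return amount * (1.0 + rate);
    }

    public static void main(String[] args) {
        // 100.00 subtotal, 10% discount, 5% tax -> approximately 94.50
        System.out.println(totalDue(List.of(60.0, 40.0), 0.10, 0.05));
    }
}
```

Writing it this way took slightly longer, but I could immediately see how much easier it would be for a teammate to follow or to unit-test each step.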
I also learned about using automated tools to maintain code quality. Honestly, I never used static analyzers or formatters seriously, but now I see how they support good software processes, which we discuss a lot in this class. Clean code isn’t just a personal style; it’s part of a team’s workflow and long-term sustainability.
How I Will Apply This Going Forward
Going forward, I plan to be more disciplined about breaking up my functions, using stronger naming, and removing unnecessary comments. I also want to integrate automated formatting and code-quality tools in my personal projects, so I get used to writing cleaner code consistently.
-
While going through different resources on API development, I came across a blog post from Stoplight titled “API Design Patterns for REST Web Services.” I decided to write about this one because it connects directly with what we’ve been discussing in class about design patterns for REST APIs under the aegis of Professor Wurst. I’ve always wanted to understand how professional developers make APIs that are easy to use and extend, and this article gave me a clearer picture of what good API design actually looks like.
The post starts by explaining that REST APIs are not just about connecting endpoints — they’re about defining patterns that make services predictable and consistent. It discusses important ideas such as resource-based design, the use of HTTP methods for specific actions (like GET, POST, PUT, DELETE), and how to organize URIs in a way that feels natural to users. It also highlights how consistent naming conventions and status codes can make APIs more reliable for anyone consuming them. One interesting point the author makes is that designing an API is a lot like designing a user interface, except the “user” is another program.
I chose this blog because it goes beyond the basics of how to make an API work and instead focuses on how to make one well-designed. The examples were simple but realistic, and the explanations didn’t feel overly academic or abstract. I could see how these design decisions connect with what we’ve learned about modularity, abstraction, and communication protocols in software process management.
Reading this made me realize that design patterns in REST APIs are as important as design patterns in object-oriented programming. They both aim to make software easier to understand and maintain. For example, I learned about the “collection pattern,” where a resource like /users represents a group of items, and /users/{id} represents an individual. This pattern keeps the structure clean and predictable. I also learned how misuse of HTTP verbs (like using POST for everything) can make an API confusing or harder to scale later.
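The collection pattern can be sketched in a few lines. This is my own toy routing example, assuming a `/users` collection like the one in the article; the `ResourcePath` class and its messages are invented for illustration. It shows how `/users` addresses the whole collection while `/users/{id}` addresses one member, with the HTTP verb deciding the action.

```java
// Toy illustration of the collection pattern: /users names the collection,
// /users/{id} names one member. The class and messages are my own sketch.
public class ResourcePath {
    public static String describe(String method, String path) {
        String[] parts = path.split("/");
        // "/users" splits into ["", "users"]; "/users/42" into ["", "users", "42"]
        if (parts.length == 2 && parts[1].equals("users")) {
            if (method.equals("GET"))  return "list all users";
            if (method.equals("POST")) return "create a user";
        }
        if (parts.length == 3 && parts[1].equals("users")) {
            String id = parts[2];
            switch (method) {
                case "GET":    return "fetch user " + id;
                case "PUT":    return "replace user " + id;
                case "DELETE": return "delete user " + id;
            }
        }
        return "unknown route";
    }

    public static void main(String[] args) {
        System.out.println(describe("GET", "/users"));       // list all users
        System.out.println(describe("POST", "/users"));      // create a user
        System.out.println(describe("GET", "/users/42"));    // fetch user 42
        System.out.println(describe("DELETE", "/users/42")); // delete user 42
    }
}
```

Laying the routes out like this made the article's point tangible: once a consumer learns the pattern for one resource, every other resource is predictable.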
Personally, this article helped me connect classroom theory with real-world engineering practice. It changed how I think about building web systems — not just as code that works, but as something other people will depend on. In my future projects, especially my capstone work and the “Investment Wisdom” app I’m building, I plan to use these principles when designing backend endpoints. I’ll pay attention to consistent URIs, proper status codes, and resource naming so that my APIs feel professional and well-structured.
Overall, this blog was a great reminder that clarity and consistency matter as much as functionality. It showed me that designing a REST API is an act of communication — between developers, systems, and even future maintainers.
-
The recent lessons on Scrum and the Agile process under the aegis of Professor Al-Faris have been very enlightening, and I have also been looking for new material online. While reading through several Agile blogs, I came across an article by Mike Cohn titled “How to Coach Your Team to Run a Daily Scrum Meeting When You Cannot Attend” on Mountain Goat Software. It immediately caught my attention because it deals with a real-world challenge that I have actually seen in group projects — what happens when the person who usually runs the meeting cannot make it. The post discusses how a Scrum Master or team lead can help their team stay organized and independent enough to run daily Scrum meetings even without them being there.
Cohn’s main point is that the Scrum Master’s goal should not be to control every meeting, but to coach the team until they can manage those sessions on their own. He explains that when a team always relies on one person to start and run the daily Scrum, they become dependent. The healthier approach is to let the team take turns facilitating, learn to keep time, and handle updates themselves. He suggests that leaders should model good habits at first — like staying on topic, focusing on progress, and keeping it short — but gradually step back so that team members feel responsible for running the meeting.
I chose this article because I can relate to it from my experience in software engineering classes where teamwork can be uneven. Sometimes one or two people end up organizing everything, while others stay quiet. This article showed me that real professional teams face the same issues, and that strong teams are the ones where everyone learns to self-organize. I also liked how Cohn keeps his advice practical — he doesn’t overcomplicate the process, but focuses on people learning through consistent, small improvements.
Reading this blog reminded me why learning about Scrum in class is not just about memorizing roles or ceremonies. It’s about building habits of communication and accountability. As a computer science student, I used to think Agile was mainly for project managers, but this article helped me see how every developer plays a part in maintaining the team’s rhythm. Even if I’m just a developer, I can help keep meetings focused or volunteer to run one. That kind of initiative builds confidence and makes the team stronger.
Resource: Mike Cohn, How to Coach Your Team to Run a Daily Scrum Meeting When You Cannot Attend
-
This week, I read a Medium article called “Solving Everyday Problems: Essential Java Design Patterns You Need to Know” by Mina.
The post talks about how software developers use design patterns to handle common problems that come up while coding. It focuses on Java and explains how certain patterns, such as Singleton, Factory, Strategy, and Observer, help make programs cleaner, easier to update, and more reliable. Mina keeps the explanations simple and shows how each pattern can be used in everyday projects.
Why I Picked This Post
I chose this article because our course covers software design and programming in Java, and I wanted to better understand how professionals write well-structured code. The recent homework assignments also pushed me to seek out more knowledge on this complex topic. I wanted to learn how to organize my code better and avoid duplication. This post caught my attention because it explains the practical side of design patterns and connects them directly to real situations that developers face.
Summary of the Blog
The author starts by explaining what design patterns are — simple, reusable solutions for problems that appear often in programming. The patterns are grouped into three main categories: Creational, Structural, and Behavioral.
- Creational patterns (like Singleton and Factory) focus on how objects are created.
- Structural patterns deal with how classes and objects work together.
- Behavioral patterns handle how objects communicate with each other.
Mina gives short, clear Java examples for each type. For example, the Singleton Pattern ensures only one object of a class is created, which is useful for things like database connections. The Factory Pattern helps manage object creation without hardcoding specific classes. The Strategy Pattern allows different algorithms to be swapped easily, and the Observer Pattern is used when one object needs to respond to changes in another (like event listeners in Java).
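To check my understanding of the Behavioral category, I wrote my own small Strategy example. It is not one of Mina's samples; the shipping-cost scenario and all names are mine. The point is that the caller can swap one pricing algorithm for another at runtime without touching the code that uses it.

```java
// My own sketch of the Strategy pattern (Behavioral category): two
// interchangeable shipping-cost algorithms behind one interface.
interface ShippingStrategy {
    double cost(double weightKg);
}

public class StrategyDemo {
    static class FlatRate implements ShippingStrategy {
        public double cost(double weightKg) { return 5.0; }
    }

    static class PerKilogram implements ShippingStrategy {
        public double cost(double weightKg) { return 2.0 * weightKg; }
    }

    // The caller picks an algorithm; this method never changes when a
    // new strategy is added.
    static double quote(ShippingStrategy strategy, double weightKg) {
        return strategy.cost(weightKg);
    }

    public static void main(String[] args) {
        System.out.println(quote(new FlatRate(), 3.0));    // 5.0
        System.out.println(quote(new PerKilogram(), 3.0)); // 6.0
    }
}
```

Writing it out made the category distinction click for me: the structure of the classes barely matters here; what the pattern organizes is how objects choose behavior.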
What I Learned and How I’ll Use It
Reading this post helped me understand that design patterns aren’t just theory from textbooks. They are habits that experienced developers use to write smarter code. I learned how using the right pattern can save time and make programs easier to change later.
For example, in my Investment Wisdom Android app, which I built as part of the Operating Systems course, I can use the Factory Pattern to manage different types of investment data without repeating code. I also now understand the Observer Pattern better, which explains how buttons and screens update automatically in Android. These ideas will help me make my code cleaner and more organized.
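A rough sketch of what that Factory might look like: the class names here (`Quote`, `StockQuote`, `BondQuote`, `QuoteFactory`) are hypothetical and do not come from the article or from the actual app; they just show the shape of the idea.

```java
// Hypothetical Factory sketch for picking an investment-data type by name.
// All names are invented for illustration.
interface Quote {
    String describe();
}

public class QuoteFactory {
    static class StockQuote implements Quote {
        public String describe() { return "stock quote"; }
    }

    static class BondQuote implements Quote {
        public String describe() { return "bond quote"; }
    }

    // Callers ask for a kind by name instead of hardcoding constructors,
    // so adding a new quote type only touches this one method.
    public static Quote create(String kind) {
        switch (kind) {
            case "stock": return new StockQuote();
            case "bond":  return new BondQuote();
            default: throw new IllegalArgumentException("unknown kind: " + kind);
        }
    }

    public static void main(String[] args) {
        System.out.println(create("stock").describe()); // stock quote
        System.out.println(create("bond").describe());  // bond quote
    }
}
```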
Conclusion
This article gave me a clearer picture of how Java design patterns improve the way programs are built. It connects directly to what we are currently learning under the aegis of Professor Wurst. Going forward, I plan to use these patterns more often so that my programs are easier to test, expand, and understand.
Tags:
CS@Worcester, CS-343, Week-8
-
In computer science, we are expected to keep learning as new technologies appear and to communicate our ideas clearly. These two skills—continuous learning and effective communication—are part of the program goals for our course. I recently listened to a podcast that fits both goals perfectly because it explains how people at GitHub use teamwork and artificial intelligence to make software development more productive.
Why I Chose This Podcast
The episode I listened to is called “How GitHub Operationalizes AI for Teamwide Collaboration and Productivity” from the SuperDataScience podcast, hosted on superdatascience.com. In this episode, the guest, Kyle Daigle, who is GitHub’s Chief Operating Officer, talks about how GitHub uses tools like GitHub Copilot to improve how teams work together. I picked this episode because I use GitHub for class projects and wanted to learn how professionals collaborate on a much larger scale. It also connects to our course topics on teamwork, version control, and communication.
What the Podcast Is About
The discussion focuses on how artificial intelligence can support developers instead of replacing them. Kyle explains that Copilot helps write code faster and gives smart suggestions during coding. But the main idea is about collaboration, not just automation.
He describes something called inner sourcing, which means using the same open-source principles inside a company. Teams share their work, review each other’s code, and reuse components just like developers do in public GitHub projects. This approach helps people across departments communicate better and learn from one another. The podcast also highlights the importance of keeping a healthy culture where AI assists but humans make the final decisions.
What I Learned and How I’ll Use It
This episode changed the way I think about tools like GitHub Copilot. Before listening, I thought of it only as a shortcut for writing code. Now I understand it can help a whole team by keeping code consistent and by making collaboration smoother. I also liked the idea that collaboration depends as much on people and process as it does on software.
In future group projects or internships, I plan to encourage my team to:
• Use version control tools like GitHub for all projects, even small ones.
• Review each other’s work openly instead of working in isolation.
• Try using AI tools like Copilot responsibly to help maintain consistency.
This podcast helped me see that good collaboration is both a technical skill and a communication skill. Learning from professionals at GitHub reminded me that teamwork is what keeps software projects successful and developers constantly growing.
Podcast link: SuperDataScience – SDS 730: How GitHub Operationalizes AI for Teamwide Collaboration and Productivity (https://www.superdatascience.com/podcast/sds-730-how-github-operationalizes-ai-for-teamwide-collaboration-and-productivity-with-github-coo-kyle-daigle)
-
The tasks assigned on version management, in particular using Git and GitHub, were very valuable to my professional journey as an aspiring software engineer. However, what surprised me was that the majority of version-management articles are written about managing code in application programming languages such as Java, Python, and JavaScript (Node.js). In this blog I would like to focus on an area of version management that is often ignored, so I have picked an interesting podcast that focuses on version control for databases. Not many people realize the power of database programming, such as stored procedures, functions, packages, and triggers, apart from DDL scripts for objects such as tables, views, and materialized views.
In the Postgres.fm podcast episode (https://postgres.fm/episodes/version-control-for-databases) “Version Control for Databases,” hosts Nikolay and Michael explore one of the trickiest challenges in software development—keeping a database’s structure under proper version control. Unlike source code, which fits neatly into systems like Git, databases hold both data and schema that constantly evolve. This discussion opened my eyes to how developers balance reliability with the need to iterate quickly in production environments.
The episode begins by comparing database versioning to traditional software versioning. The hosts note that while code can be reverted with a single Git command, databases contain stateful information that can’t simply be rolled back without consequences. They talk about migrations, tools such as Flyway and Liquibase, and why automated migrations are safer than manual SQL edits. The key takeaway is that treating schema changes as first-class citizens in the development process ensures consistency across environments—from a developer’s laptop to staging and production.
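To convince myself I understood the core idea, I wrote a toy simulation of it. This is emphatically not the Flyway or Liquibase API; the `MigrationRunner` class is my own illustration of the concept the hosts describe: each schema change is a numbered migration, and a record of applied versions makes reruns safe because already-applied changes are skipped.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy illustration (my own, not a real migration tool's API) of versioned
// migrations: changes run in order, and applied versions are recorded so
// running the migrator twice is harmless.
public class MigrationRunner {
    // version -> SQL, kept in insertion order, like V1__..., V2__... files.
    private final Map<Integer, String> migrations = new LinkedHashMap<>();
    private final List<Integer> appliedVersions = new ArrayList<>();

    public void register(int version, String sql) {
        migrations.put(version, sql);
    }

    // Apply only the migrations that have not run yet, in version order,
    // and return the SQL that was executed on this run.
    public List<String> migrate() {
        List<String> executed = new ArrayList<>();
        for (Map.Entry<Integer, String> m : migrations.entrySet()) {
            if (!appliedVersions.contains(m.getKey())) {
                executed.add(m.getValue()); // in real life: run against the DB
                appliedVersions.add(m.getKey());
            }
        }
        return executed;
    }

    public static void main(String[] args) {
        MigrationRunner runner = new MigrationRunner();
        runner.register(1, "CREATE TABLE users (id INT)");
        runner.register(2, "ALTER TABLE users ADD COLUMN name TEXT");
        System.out.println(runner.migrate()); // both migrations run
        runner.register(3, "CREATE INDEX idx_users_name ON users(name)");
        System.out.println(runner.migrate()); // only the new one runs
    }
}
```

Even this tiny model shows why the hosts insist on automated migrations: the tracking list is exactly what a manual SQL edit bypasses, and bypassing it is how environments drift apart.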
One point that really resonated with me was when the hosts mentioned the risk of “drift”—the gradual divergence between what’s defined in version control and what actually exists in production. I’ve seen this happen in group projects when one teammate updates a table locally but forgets to share the migration script. The podcast offered practical strategies to avoid that: make migrations mandatory, run them through CI pipelines, and ensure rollback scripts are tested just like forward ones.
From a personal standpoint, this episode changed how I think about teamwork and database safety. I learned that version control is not just about tracking lines of code—it’s about ensuring that every system component, including data models, can be reproduced and audited. Going forward, I plan to incorporate database migrations into my Git workflow, even for small personal projects.
To conclude, I would like to thank you for patiently reading my blog post and for encouraging me in my journey to become a good data engineer.
URL for Podcast : https://postgres.fm/episodes/version-control-for-databases
-
Introduction
In computing, professional growth relies on continuous learning well beyond the classroom. With technologies evolving rapidly, it is necessary to explore resources that strengthen both technical and communication skills. This course emphasizes those goals by focusing on two outcomes: mastering emerging methods and expressing ideas clearly in writing and speech. In this post, I share how a podcast on software process management supported these outcomes and expanded my understanding of collaboration in development work.
Resource Summary and Selection
The resource I selected is a podcast episode discussing effective practices for managing software teams and improving project flow. You can find it here: https://softwareengineeringdaily.com/2025/08/19/empowering-cross-functional-product-teams-with-tobias-dunn-krahn-and-doug-peete/
I chose this episode because podcasts often present material in a conversational and practical way. Unlike written posts, the dialogue between speakers adds context and tone, which made the discussion of process management easier to connect with.
Reflections and Key Takeaways
The episode highlighted how adaptability is essential in project management. Agile and DevOps were described as tools to handle change, reduce delays, and ensure quality. What struck me most was the emphasis on communication. The speakers described how gaps between developers and stakeholders often caused setbacks, and how simple practices like frequent check-ins or retrospectives helped overcome them. This encouraged me to think about how I might use these approaches in my own work to foster teamwork and reduce friction.
Podcasts vs. Blogs in Process Management
Blogs and podcasts each bring unique strengths to learning. Blogs are useful for detailed explanations, visual aids, and structured references—making them ideal for revisiting frameworks or technical steps. Podcasts, on the other hand, offer a dynamic experience. Hearing professionals speak candidly about challenges and decisions provides insight into the human side of project work that text may not fully capture. For process management, where collaboration and decision-making matter as much as tools, podcasts often relay those nuances more effectively.
I believe the two formats complement one another: blogs provide permanence and technical clarity, while podcasts make abstract practices more relatable. Together, they offer a balanced approach to professional development.
Conclusion
Reflecting on this podcast reminded me of the value of lifelong learning in computing. Adopting new models and improving communication are essential for success, and both blogs and podcasts serve as valuable resources for that growth. Moving forward, I plan to engage with both formats to build a deeper understanding of software practices and strengthen my professional toolkit.