
Posts

Showing posts from 2024

docker: storage and services

Docker Volume

Without going through Docker, accessing files inside a Docker image is not easy. Docker therefore provides a volume feature that lets users designate a location on the host file system to serve as a directory the container can access. In other words, a volume plays a role similar to NFS for a Docker image.

Example: using a volume to store SQLite files.
ref: SQLite for NodeJS: https://www.sqlitetutorial.net/sqlite-nodejs/

Initialize the project:
  # Set up the ExpressJS framework
  express --view=pug 240324_ejs_sqlite_docker
  npm init
  npm install express
  npm install sqlite3
Test:
  npm start

Initialize Docker. Since the directory already contains a Node.js project, Docker can assist with the setup. Create a new Docker image:
  docker build -t atfuture7/sqlite01 .

Create a folder for Docker, then create and run a container:
  mkdir docker_vol
  docker run -p 3000:3000 -v ./docker_vol:/data --name exp_sqlite atfuture7/sqlite01

After confirming that the container runs correctly, stop the container…

Docker, Virtual-Desktop and build-docker-image

Previously, my work platform involved separate development, testing, and production environments. Docker is relatively new to me, and since I also held a system administrator role, the characterization of Docker as "lightweight and simple" by supporters in the programming community left me puzzled for a while. Here is a brief explanation of the differences between VMs, Docker, and virtual desktops as I understand them. The platforms most people commonly interact with are standalone systems, typically Windows, Mac, Android, and iOS. In the early Unix era, the system architecture consisted of a host and terminals. Terminals are lightweight systems with basic functions, a model still visible today in devices like the Raspberry Pi. When work requires several systems performing different tasks, dedicating a separate hardware device to each can be wasteful; this led to the development of hypervisor and VM (virtual machine) architectures. Sometimes institutions ne…

Review old/new stuff: Angular 17

When I reviewed Angular recently, I discovered that it had undergone a major revision three months earlier: v17. The notes here summarize common features introduced in v17.

Install Angular
ref: https://angular.io/guide/setup-local
  npm install -g @angular/cli
Create a new project:
  ng new proj_name
  cd proj_name
If an old project needs to be migrated to the v17 control flow:
ref: https://youtu.be/36Hcx7kRYDg?si=pmZcRkQHIlmguK3K
  ng generate @angular/core:control-flow
Activate:
  ng serve

Data Flow
ref: https://angular.io/tutorial/first-app/first-app-lesson-09
A service can be regarded as a custom class (similar to custom modules written in ExpressJS):
  ng generate service obj_name
The sample program iterates over an array:
  this.housingLocationList.find(housingLocation => housingLocation.id === id);
The general form is:
  find(element => { return condition; })
ref: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/fi…
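The find() call above can be illustrated in plain JavaScript, outside Angular. The list below mirrors the shape of the tutorial's HousingLocation objects (the ids and names here are made up for the example):

```javascript
// Plain-JS illustration of Array.prototype.find as used in the
// Angular tutorial sample.
const housingLocationList = [
  { id: 0, name: 'Acme Fresh Start Housing' },
  { id: 1, name: 'A113 Transitional Housing' },
];

// find() returns the first element for which the callback is
// truthy, or undefined when nothing matches.
const match = housingLocationList.find(
  (housingLocation) => housingLocation.id === 1
);
console.log(match.name); // 'A113 Transitional Housing'

const missing = housingLocationList.find((h) => h.id === 99);
console.log(missing); // undefined
```

Note that find() stops at the first match, so it suits lookups by a unique id; filter() would scan the whole array and return an array instead.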

expressJS and nodeJS module

While refreshing my skill set, I became acquainted with the ExpressJS framework. Different frameworks share similar fundamental concepts. Here, I document the development process starting from the basic initial structure.

1. Install a basic ExpressJS project
  npm install express-generator
  express --view=jade 2403_express_sample
  cd 2403_express_sample
  npm install express
At this point, you should have the basic structure of ExpressJS set up. Next, install Mongoose, which is used with MongoDB:
  npm install mongoose
Run the first test:
  npm run start   (Ctrl+C to stop)

2. Checking the ExpressJS structure
2-1. The startup file for ExpressJS. In ./package.json, find "scripts": {}. Among them, "start" records the startup file. Add another startup entry, "dev": "node ./test/loc", then create a loc.js file at the corresponding path. This entry is for testing; run it with:
  npm run dev
2-2. Find the port…

Review old stuff: mongoDB

Recently, I revisited old skills, extending to MongoDB. After reinstalling MongoDB, I felt that the website's architecture had changed significantly compared with the last time I created a sample. My old sample project was a record of the main characters from the Japanese anime "Demon Slayer." In just four years, from March 2020 to now, the website structure has changed considerably, indicating substantial improvements in MongoDB's functionality.

Resources
Some content has been spread across several pages, but I find this arrangement clearer. Below are the resources I briefly looked up:
community version: https://www.mongodb.com/try/download/community-edition
MongoDB shell: https://www.mongodb.com/try/download/shell
Database tools: https://www.mongodb.com/try/download/bi-connector (dump, restore, import, export)
NodeJS reference (developer): https://www.mongodb.com/docs/drivers/node/current/quick-reference/
mongosh manual: https://www.mongodb.com/docs…

Review old stuff: js and ts

NodeJS, Flask, Angular, and React are not new to me. Given the researchers' recommendation to train students on modern industrial processes, and my aim to develop a more user-friendly UI for AI testing, it is prudent to refresh my skills in familiar areas I haven't used recently. Setting up a development environment is a key step in this process.

To run TypeScript without build automation, two steps are needed:
Initialize: tsc --init
Compile .ts to .js: tsc source_file.ts

Reference:
TypeScript: https://www.typescriptlang.org/
JavaScript: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference

Reflections on course 'Generative AI with Large Language Models'

course: Generative AI with Large Language Models
This course is introduced by Andrew Ng and mainly consists of lectures by AWS engineers, who explain various current theories, plus three hands-on labs.
Week 1: Introduction to LLMs and the project lifecycle. The course introduces the architecture of Large Language Models (LLMs), their conceptual framework, and methods of scaling. It includes practical sessions demonstrating how to load a model and use extension datasets. The lectures cover the initial pre-training phase and discuss the impact of domains on both the overall system and specialized aspects.
Week 2: Fine-tuning. The course explains how a model, after initial training, can be further extended to strengthen the areas a user needs. It introduces examples of fine-tuning and includes a practical session demonstrating extension training and integration.
Week 3: Reinforcement learning from human feedback (RLHF). The course discusses the social respons…

Guide to Preserving HuggingFace Models in Google Colab Environments

Conclusion:
Step 1: Find the model path: ls ~/.cache
Step 2: Copy the entire folder to Google Drive.
Step 3: Set the model path to the subfolder under snapshot.

My Story: I initially began exploring Generative AI (GAI) and Google Colab through Stable Diffusion. Having mainly written server services and console applications in the past, I was less familiar with data-science tools like R and Jupyter that can hold a paused state. I didn't quite understand the heavy burden placed on Colab by creating a temporary Stable Diffusion WebUI from an .ipynb, as popular guides suggested; I just found it troublesome that connections often took a long time to establish and then dropped, requiring a restart. Recently, while testing new versions of the Stable Diffusion model, and with Colab's policies making various versions of the WebUI difficult to run successfully, I started researching how to write my own test programs in Colab. Eventually, I understood that Colab is essentially a VM, capabl…

Reflections on course 'Generative AI for Everyone'

Course: Generative AI for Everyone
The course "Generative AI for Everyone" discusses the design principles, practical applications, and potential societal impacts of Generative AI (GAI). It shares a similar structure with the "AI for Everyone" course, highlighting the differences between GAI and traditional AI, as well as between general-purpose AI and Artificial General Intelligence (AGI).
Week 1: What is Generative AI? Generative AI (GAI) is a type of AI that predicts the next step from existing content and feeds the results back as a reference for later iterations. In imagery, it works by identifying and clarifying an image out of noise. A key characteristic of GAI is that smaller models have limited growth potential and do not improve much with more data, whereas larger models have greater room for development. In terms of factual accuracy, GAI is not as reliable as web search. GAI is better suited to simple interactions, summarizing long arti…

Reflections on the course 'AI for Everyone'

Course: AI for Everyone
The course "AI For Everyone" discusses the basic concepts of AI, its potential applications, and its societal impact.
Week 1 focuses on AI concepts, covering data interpretation, machine learning, corporate adoption of AI, the limitations of AI, and examples of neural networks and deep learning.
Week 2 delves into AI projects, exploring developments in data science, machine learning, and neural networks. It emphasizes the complexity of neural networks' decision-making and the limitations of AI. The course also discusses the challenges for companies developing AI strategies, emphasizing the need for expert evaluation of a project's AI potential, from collecting material to creating a feasible prototype and securing the support such a venture requires; only then should forming an AI team be considered.
Week 3: AI and Industry. This week focuses on how industries can tra…