Two things you can do to improve the accuracy of a machine learning model

To improve the accuracy of a machine learning model, you can consider implementing the following two strategies:

  1. Feature Engineering:
    • Feature engineering involves selecting, transforming, or creating new features from the existing dataset to improve the model’s performance.
    • Identify relevant features that have a strong predictive power for the target variable and remove irrelevant or redundant features.
    • Transform features to better represent the underlying relationships in the data, such as scaling numerical features, encoding categorical variables, or creating interaction terms.
    • Extract additional features from the existing ones that might capture important patterns or relationships in the data.
    • Utilize domain knowledge to engineer features that are more informative for the problem at hand.
  2. Hyperparameter Tuning:
    • Hyperparameters are parameters that are set before the learning process begins and control the behavior of the machine learning algorithm.
    • Perform hyperparameter tuning to find the optimal combination of hyperparameters that maximize the model’s performance.
    • Utilize techniques such as grid search, random search, or Bayesian optimization to search the hyperparameter space efficiently.
    • Focus on tuning hyperparameters that have a significant impact on the model’s performance, such as regularization strength, learning rate, tree depth, and ensemble size.
    • Use cross-validation to evaluate different combinations of hyperparameters and select the ones that generalize well to unseen data.
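As an illustration of the first strategy, here is a small stdlib-only Python sketch (the data and feature names are made up for illustration) showing two common feature-engineering steps: min-max scaling and an interaction term:

```python
# Toy example: two numeric features on very different scales.
def min_max_scale(values):
    """Rescale values linearly to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

ages = [20, 30, 40]
incomes = [1000, 2000, 4000]

scaled_ages = min_max_scale(ages)        # [0.0, 0.5, 1.0]
scaled_incomes = min_max_scale(incomes)  # [0.0, 0.333..., 1.0]

# Interaction term: a new feature capturing the combined effect of two others.
interaction = [a * i for a, i in zip(scaled_ages, scaled_incomes)]
print(interaction)
```

Scaling keeps one feature (income, in the thousands) from dominating another (age) in distance- or gradient-based models.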
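The second strategy can likewise be sketched with the standard library alone: a grid search over the single hyperparameter of a toy threshold classifier, scored with k-fold cross-validation. The model and data are hypothetical; in practice you would use a library such as scikit-learn.

```python
# Minimal grid search with k-fold cross-validation, stdlib only.
# Toy model: predict class 1 if the feature exceeds a threshold.
import statistics

def k_fold_indices(n, k):
    """Yield (train, valid) index lists for k contiguous folds."""
    fold = n // k
    for i in range(k):
        valid = list(range(i * fold, (i + 1) * fold))
        train = [j for j in range(n) if j not in valid]
        yield train, valid

def accuracy(threshold, xs, ys):
    preds = [1 if x > threshold else 0 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

def grid_search(xs, ys, grid, k=4):
    """Return the threshold with the best mean validation accuracy."""
    scores = {}
    for t in grid:
        fold_scores = []
        for train, valid in k_fold_indices(len(xs), k):
            # The toy model needs no fitting, so the train split is unused;
            # we only score on the held-out validation fold.
            fold_scores.append(
                accuracy(t, [xs[i] for i in valid], [ys[i] for i in valid]))
        scores[t] = statistics.mean(fold_scores)
    return max(scores, key=scores.get), scores

xs = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
best, scores = grid_search(xs, ys, grid=[0.25, 0.5, 0.75])
print(best)  # 0.5 separates the two classes perfectly
```

Choosing the hyperparameter by mean score across folds, rather than on a single split, is what makes the selected value more likely to generalize to unseen data.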

Biba, Star Model, and Bell-LaPadula Security Models

Biba Model:

  • Developed by Kenneth J. Biba in 1977.
  • Focuses on data integrity.
  • Arranges subjects and objects in a hierarchy of integrity levels (commonly illustrated as High, Medium, and Low); labels such as Top Secret and Secret belong to confidentiality models like Bell-LaPadula, not Biba.
  • Principle: “No write up, no read down.”
  • Ensures that data integrity is maintained by preventing information from being modified or accessed by users at lower integrity levels.

Star Model (Clark-Wilson Model):

  • Developed by David D. Clark and David R. Wilson in 1987.
  • Focuses on integrity and separation of duties.
  • Key components: Constrained Data Items (CDIs), Transformation Procedures (TPs), and Integrity Verification Procedures (IVPs).
  • Enforces well-formed transactions through the use of these components, ensuring that data is accessed and modified only through authorized mechanisms.

Bell-LaPadula Model:

  • Developed by David Elliott Bell and Leonard J. LaPadula in 1973.
  • Focuses on confidentiality.
  • Introduces the concepts of Simple Security Property (no read up) and *-Security Property (no write down).
  • Enforces the principle of “no read up” and “no write down”, meaning a subject cannot read data at a higher security level (without appropriate clearance) or write data to a lower security level.
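Biba and Bell-LaPadula are duals of each other: the same comparison flips direction depending on whether the goal is integrity or confidentiality. A minimal sketch in Python, assuming a simple ordered label hierarchy (the labels and function names are illustrative, not from any standard API):

```python
# Illustrative label hierarchy; higher number = higher level.
LEVELS = {"Public": 0, "Secret": 1, "TopSecret": 2}

def blp_can_read(subject, obj):
    # Bell-LaPadula Simple Security Property: no read up
    return LEVELS[subject] >= LEVELS[obj]

def blp_can_write(subject, obj):
    # Bell-LaPadula *-Property: no write down
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_read(subject, obj):
    # Biba Simple Integrity Property: no read down
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_write(subject, obj):
    # Biba *-Integrity Property: no write up
    return LEVELS[subject] >= LEVELS[obj]

# A Secret subject may read Public data but not write to it under BLP;
# under Biba the permissions are exactly reversed.
print(blp_can_read("Secret", "Public"), blp_can_write("Secret", "Public"))   # True False
print(biba_can_read("Secret", "Public"), biba_can_write("Secret", "Public")) # False True
```

The symmetry makes the two models easy to remember: BLP protects secrets from leaking downward, Biba protects trusted data from being corrupted from below.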

Summary of a research paper

DOI: 10.1109/ISCIT.2019.8905238

Authors:

Boonsit Yimwadsana, Vichhaiy Serey, Snit Sanghlao

Abstract:

Location-based services (LBS) have emerged as a key computing application service for various industries as computing systems move closer to real-world human physical interaction. In the last ten years, global positioning system and object tracking have taken various industries by storm. Applications such as global positioning, navigation, logistics, object tracking, or even museum guides are developed around the LBS technology. The invention of the Global Positioning System (GPS) allows the method of trilateration to be used in full effect to pinpoint anyone’s location on Earth. However, the same technique cannot be used effectively when an individual is inside a building. A large portion of GPS signals from satellites cannot penetrate through walls and buildings. Various methods in all layers of conventional communication systems have been introduced, and indoor positioning using Wi-Fi is leading the competition at the moment. This research aims to analyze the performance of a Wi-Fi indoor positioning system in infrastructure mode. According to various studies and fundamental knowledge, the installation positions of the infrastructure access points relative to the target detection areas of objects to be tracked play the most significant role. In addition, bandwidth, types of Wi-Fi client devices, and the number of devices in the area greatly affect the performance of the system. This work shows how these factors affect the positioning error of the system. The results allow us to conclude that the current Wi-Fi indoor positioning system in infrastructure mode can only be used in applications that do not require extremely accurate positions, such as navigation.

Rate this paper:

The paper titled “Performance Analysis of an AoA-based Wi-Fi Indoor Positioning System” by Boonsit Yimwadsana, Vichhaiy Serey, and Snit Sanghlao from Mahidol University, Thailand, presents a comprehensive study on the efficacy of Wi-Fi-based indoor positioning systems using Angle of Arrival (AoA) methodology. The paper is structured around several key components that make for a robust academic contribution:

1. Introduction and Background: The paper begins with a detailed explanation of the relevance and application of Location-based Services (LBS) in various industries. It provides a clear rationale for the study by highlighting the limitations of Global Positioning System (GPS) technologies indoors and the potential of Wi-Fi-based solutions.

2. Literature Review: The authors offer an extensive review of existing technologies and methodologies for indoor positioning, discussing the merits and demerits of infrastructure-based vs. client-based positioning, and various estimation techniques like Time of Arrival (ToA), Time Difference of Arrival (TDoA), and Fingerprinting. This section sets a solid foundation for their research by situating it within the context of existing scholarly work.

3. Methodology: The research methodology is detailed and replicable. The authors describe the experimental setup at Mahidol University, including the configuration of Wi-Fi access points and the parameters tested. The inclusion of input and output variables such as channel bandwidth, test points, device type, and positioning error offers a clear view of the experimental design.

4. Results and Discussion: The results are presented in a structured manner, with findings from various scenarios analyzed for their implications on Wi-Fi indoor positioning system (WIPS) performance. The discussion on the impact of channel bandwidth, device orientation, and the presence of interfering signals from other networks provides valuable insights into the practical challenges of implementing WIPS.

5. Conclusion: The paper concludes by summarizing the key findings, acknowledging the conditions under which WIPS performs optimally, and suggesting areas for future research. The acknowledgment of limitations and the need for further exploration demonstrate a scholarly approach to research.

6. References: The reference section is comprehensive, indicating a thorough engagement with existing literature in the field.

Evaluation

Overall, the paper is well-structured, with a clear flow from introduction through to conclusion. The methodology is robust, allowing for reproducibility of the study, which is a key criterion in scientific research. The analysis of results is thorough, with considerations of various factors that could affect the performance of WIPS. The paper makes a significant contribution to the field by providing a nuanced understanding of the challenges and potential of Wi-Fi-based indoor positioning systems.

The only potential areas for improvement might be in the depth of analysis regarding the algorithmic aspects of AoA estimation and a more detailed discussion on the scalability of the proposed system in different indoor environments. However, these aspects do not significantly detract from the overall quality and relevance of the paper.

In summary, the paper is a valuable addition to the field of indoor positioning systems, offering practical insights and a strong foundation for future research.

Rated score: 8.5/10 by ChatGPT 4.0

Two ways to create a list in Python

Here are two ways to create a list containing the numbers 1, 3, and 5 in Python:

1. Using square brackets:

numbers = [1, 3, 5]

This is the most common and straightforward way. Square brackets [] define a list, and you specify the elements within the brackets separated by commas.

2. Using the list constructor:

numbers = list([1, 3, 5])

While less common, you can use the list constructor to explicitly create a list object. Here, list is a built-in function that takes an iterable (like another list) as input and returns a new list containing those elements.

Both methods will create a list named numbers containing the numbers 1, 3, and 5. You can then access or modify the elements in the list using indexing (e.g., numbers[0] for the first element).
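Both forms, along with the indexing mentioned above, can be checked quickly:

```python
numbers = [1, 3, 5]
assert numbers == list([1, 3, 5])  # both constructions give the same list
print(numbers[0])   # 1, the first element (indexing starts at 0)
numbers.append(7)   # lists are mutable, so elements can be added later
print(numbers)      # [1, 3, 5, 7]
```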

GPG for Git commit signing

What is GPG?
GPG can use both symmetric and asymmetric encryption to encrypt, decrypt, and sign messages or data using public and private keys.

Below is an example of using GPG to sign commits:

gpg --version

gpg --full-generate-key

gpg --list-signatures
gpg --list-keys

gpg --list-secret-keys --keyid-format=long

gpg --armor --export B28B0B9E2999658A

git add .
git commit -S -m "add signed commit"

git config --global user.signingkey B28B0B9E2999658A

git config --global commit.gpgsign true

Now there is no need to explicitly specify -S:

git add .
git commit -m "add without s option"


On a Linux distribution that uses gpg2, you have to set the GPG program explicitly:

git config --global gpg.program gpg2

In case you want to revoke / stop using the key:
gpg --gen-revoke B28B0B9E2999658A

If you want to delete the key:
gpg --delete-secret-key B28B0B9E2999658A

GPG can also be used to sign documents: https://vichhaiy.wordpress.com/2023/08/25/gpg-commands-to-sign-docs-file/

Jenkins sample Telegram notification

pipeline {
    agent { label "" }
    parameters {}
    environment {}
    stages {}

    post {
        always {
            node("${MasterNode}") {
                script {
                    withCredentials([string(credentialsId: 'Telegram_Token', variable: 'Telegram_Token'), string(credentialsId: 'ChatID', variable: 'ChatID')]) {
                        telegramWebNotify(
                            "${Telegram_Token}",
                            "${ChatID}",
                            "${telegram_description}",
                            "${currentBuild.currentResult}",
                            "${BRANCH}",
                            "${BUILD_NUMBER}",
                            "UAT",
                            "some user"
                        )
                    }
                }
            }
        }
    }
}

Microsoft Aspire?

What is Aspire?

Microsoft Aspire (software development): This is a set of tools and patterns for building cloud-native applications using the .NET platform. It’s designed to make it easier and faster to develop and deploy these types of applications.

Here’s a concise guideline to containerizing a Microsoft .NET Aspire application:

Prerequisites:

  • .NET 8.0 or later installed.
  • .NET Aspire workload: Install it using either the Visual Studio installer or the command dotnet workload install aspire.
  • Docker Desktop or a similar containerization tool.

Basic Steps:

  1. Create an Aspire application:
    • Use Visual Studio’s template for a .NET Aspire Starter Application.
    • Or, create an empty solution and manually add a project with the Microsoft.NET.Aspire.AppHost type.
  2. Define your app model:
    • In the App Host project, use the DistributedApplication.CreateBuilder method to create an application builder.
    • Add resources (projects, containers, external services) using the appropriate methods on the builder (e.g., AddProject, AddContainer, AddRedisContainer).
    • Define dependencies between resources using WithReference.
  3. Build and run locally:
    • Use dotnet build and dotnet run to build and run the application in your development environment.
  4. Containerize:
    • Use the azd init command in the terminal to initialize a deployment environment.
    • Follow the prompts to configure deployment to Azure Container Apps.
  5. Deploy to Azure Container Apps:
    • Use the azd up command to provision and deploy the app to Azure Container Apps.

Additional Points:

  • Azure Container Apps is a recommended target for deploying .NET Aspire apps.
  • Azure CLI (azd) is a helpful tool for managing the deployment process.
  • Authentication: You’ll need to authenticate with Azure to deploy.
  • Pushing containers: Enable the Admin user on your Azure Container Registry if needed.

For more detailed instructions and troubleshooting, refer to the official documentation: https://learn.microsoft.com/en-us/dotnet/aspire/deployment/azure/aca-deployment

https://learn.microsoft.com/en-us/samples/dotnet/aspire-samples/aspire-database-containers/

Understand base64 encoding

Base64 is not encryption — it’s an encoding. It’s a way of representing binary data using only printable (text) characters.

Were you told, or did you think, that Base64 is an encryption algorithm? Beware! This is not only erroneous but also dangerous, because Base64 is just an encoding algorithm that does not protect sensitive data in any way.

Encryption requires a key (string or algorithm) in order to decrypt; hence the “crypt” (root: cryptography).

Encoding merely maps one representation of data to another. In this case, arbitrary bytes of data can be represented and transported over text-only channels such as HTTP.

There are plenty of free online decoders for converting Base64 to text or binary, but there is also a base64 command available in Linux or Git Bash, or you can simply use the base64 routines in the openssl library.

An example of a Base64 use case:
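For instance, encoding and decoding a string with Python’s standard base64 module:

```python
import base64

data = "Hello, World!".encode("utf-8")          # text -> raw bytes
encoded = base64.b64encode(data).decode("ascii")
print(encoded)   # SGVsbG8sIFdvcmxkIQ==
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)   # Hello, World!
# Anyone can decode it -- no key is involved, so this is not encryption.
```

The round trip works for any bytes, which is why Base64 is used to embed binary payloads (images, certificates, credentials) in text formats, never to protect them.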

Create release metadata in a custom script

In this CI/CD example the release preparation is split into separate jobs for greater flexibility:

  • The prepare_job job generates the release metadata. Any image can be used to run the job, including a custom image. The generated metadata is stored in the variable file variables.env. This metadata is passed to the downstream job.
  • The release_job uses the content from the variables file to create a release, using the metadata passed to it in the variables file. This job must use the registry.gitlab.com/gitlab-org/release-cli:latest image because it contains the release CLI.

https://docs.gitlab.com/ee/user/project/releases/release_cicd_examples.html

prepare_job:
  stage: prepare # This stage must run before the release stage
  rules:
    - if: $CI_COMMIT_TAG
      when: never # Do not run this job when a tag is created manually
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch
  script:
    - echo "EXTRA_DESCRIPTION=some message" >> variables.env # Generate the EXTRA_DESCRIPTION and TAG environment variables
    - echo "TAG=v$(cat VERSION)" >> variables.env # and append to the variables.env file
  artifacts:
    reports:
      dotenv: variables.env # Use artifacts:reports:dotenv to expose the variables to other jobs

release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  needs:
    - job: prepare_job
      artifacts: true
  rules:
    - if: $CI_COMMIT_TAG
      when: never # Do not run this job when a tag is created manually
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch
  script:
    - echo "running release_job for $TAG"
  release:
    name: 'Release $TAG'
    description: 'Created using the release-cli $EXTRA_DESCRIPTION' # $EXTRA_DESCRIPTION and the $TAG
    tag_name: '$TAG' # variables must be defined elsewhere
    ref: '$CI_COMMIT_SHA' # in the pipeline. For example, in the
    milestones: # prepare_job
      - 'm1'
      - 'm2'
      - 'm3'
    released_at: '2020-07-15T08:00:00Z' # Optional, is auto generated if not defined, or can use a variable.
    assets:
      links:
        - name: 'asset1'
          url: 'https://example.com/assets/1'
        - name: 'asset2'
          url: 'https://example.com/assets/2'
          filepath: '/pretty/url/1' # optional
          link_type: 'other' # optional