# til
## javascript
Write code like it's synchronous
```javascript
let get = async URL => {
  const retval = await fetch(URL);
  if (retval.ok) {
    this.playlists = await retval.json();
  } else {
    console.error("doh! network error");
  }
};
get("https://example.com/playlists.json"); // example URL
```
Or use promise chaining
```javascript
fetch(URL)
  .then(response => response.json())
  .then(data => (this.playlists = data))
  .catch(error => console.error(error));
```
## bash
auto-reply `y` on installations or `fsck` repairs
yes | pacman -S <something>
convert all svg files from current path into pngs
find . -name '*.svg' -exec mogrify -format png {} +
network diff
FILE=/tmp/a
diff -ru <(sort -u "$FILE") <(ssh user@host "sort -u $FILE")
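The same compare-sorted-unique-lines idea can be sketched in Python with the standard `difflib` module (the two lists below are stand-ins for the local and remote file contents; fetching the remote side is up to you):

```python
import difflib

# stand-ins for the local and the remote file contents
local = sorted(set(["b", "a", "c"]))
remote = sorted(set(["b", "c", "d"]))

# unified diff of the sorted, de-duplicated line sets
for line in difflib.unified_diff(local, remote, fromfile="local", tofile="remote", lineterm=""):
    print(line)
```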
## ffmpeg
extract audio-only from video file with ID3 tags
ffmpeg -i <input video> -metadata title="Title" -metadata artist="Artist" -ab 256k file.mp3
record screen
ffmpeg -f x11grab -s 1366x768 -i :0.0 -r 25 -threads 2 -c:v libx264 -crf 0 -preset ultrafast output.mkv
## html
print-friendly tables: repeat headers/footers on every page and avoid breaking rows
```html
<style type="text/css">
  table {
    page-break-inside: auto;
  }
  tr {
    page-break-inside: avoid;
    page-break-after: auto;
  }
  thead {
    display: table-header-group;
  }
  tfoot {
    display: table-footer-group;
  }
</style>
```
## iptables
drop all but accept from one ip
iptables -A INPUT -p tcp --dport 8000 -s 1.2.3.4 -j ACCEPT
iptables -A INPUT -p tcp --dport 8000 -j DROP
drop all incoming ssh connections
iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j DROP
## docker
Delete all containers
- `docker rm $(docker ps -a -q)`

Delete all images
- `docker rmi $(docker images -q)`

Delete all dangling images
- `docker rmi $(docker images -f dangling=true -q)`

Create a docker network
- `docker network create mynet`

clean up
- `docker system prune`
- `docker volume rm $(docker volume ls -q --filter dangling=true)`

run two containers of image `img` with the names `foo` and `bar`; they can reach each other under the hostnames `foo` and `bar`
- `docker run --name foo --net mynet img`
- `docker run --name bar --net mynet img`

copy a file from an image to `/tmp/some.file`
- `docker cp $(docker create my/image:latest):/etc/some.file /tmp/some.file`
### curl docker sock api
```
curl -s --unix-socket /var/run/docker.sock http://localhost/services|jq '.[0]'
curl --unix-socket /var/run/docker.sock http://localhost/nodes|jq
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
docker service ls --format '{{json .}}'| jq '.'
```
## git
Set git to use the credential memory cache
git config --global credential.helper cache
Set the cache to timeout after 1 hour (setting is in seconds)
git config --global credential.helper 'cache --timeout=3600'
Set default editor
git config --global core.editor "vim"
create a patch from a modified file
git diff <modified file> > this.patch
apply a diff patch
git apply this.patch
checkout a pull request
add `fetch = +refs/pull/*/head:refs/remotes/origin/pr/*` to `.git/config` under the `[remote "origin"]` section.
do `git fetch origin`. Now you can `git checkout pr/999`.
# youtube-dl
search and download first match
youtube-dl ytsearch:damniam
set auto id3 tags
youtube-dl --prefer-ffmpeg --embed-thumbnail --add-metadata --metadata-from-title "%(artist)s - %(title)s" --audio-quality 0 --audio-format mp3 --extract-audio https://www.youtube.com/watch?v=mvK_5nNPKr8
# retropie
for ROM converting
community/ecm-tools 1.03-1 [Installed]
Error Code Modeler
`ecm2bin rom.img.ecm`
# ufw
list all rules
- `ufw status`

disable/enable firewall
- `ufw enable`
- `ufw disable`
# systemd
list all unit files and their state
`systemctl list-unit-files`
# vs code
pipe into vs-code
ps fax | grep code | code-oss - # for using open source version of vs code
compare open file to clipboard
`workbench.files.action.compareWithClipboard`
# Nodejs
## sleep()
```
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

sleep(1500).then(() => console.log("after 1.5 seconds"));

async function main() {
  await sleep(20000);
  console.log("do something after 20 seconds");
}
main();
```
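For comparison, the same pattern in Python is built in: `asyncio.sleep` is directly awaitable, so no hand-rolled promise wrapper is needed.

```python
import asyncio

async def main():
    # non-blocking pause; other tasks can run meanwhile
    await asyncio.sleep(0.1)
    print("after 0.1 seconds")

asyncio.run(main())
```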
# nginx
proxy an api service and add cors headers and basic auth
```
load_module modules/ngx_http_headers_more_filter_module.so;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name 127.0.0.1;

        location / {
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Authorization "Basic ZW5lcmdpY29zOmVuZXJnaWNvcw==";
            set $cors "1";

            if ($request_method = 'OPTIONS') {
                set $cors "${cors}o";
            }

            if ($cors = "1") {
                more_set_headers 'Access-Control-Allow-Origin: $http_origin';
                more_set_headers 'Access-Control-Allow-Credentials: true';
            }

            if ($cors = "1o") {
                more_set_headers 'Access-Control-Allow-Origin: $http_origin';
                more_set_headers 'Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE';
                more_set_headers 'Access-Control-Allow-Credentials: true';
                more_set_headers 'Access-Control-Allow-Headers: Origin,Content-Type,Accept';
                add_header Content-Length 0;
                add_header Content-Type text/plain;
                return 204;
            }

            proxy_pass https://some.url;
        }
    }
}
```
A Dockerfile would look like this
```
FROM alpine:3.7
RUN apk --update --no-cache add nginx nginx-mod-http-headers-more
COPY nginx.conf /etc/nginx/nginx.conf
RUN mkdir /run/nginx
EXPOSE 80
CMD nginx -g 'daemon off;'
```
nginx dynamic reverse proxy for docker swarm mode
```
events {
    worker_connections 1024;
}

http {
    server {
        resolver 127.0.0.11;

        location ~ /(.*) {
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            set $upstream $1;
            proxy_pass http://$upstream;
        }
    }
}
```
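The regex location works because `$1` captures everything after the leading slash, and Docker's embedded DNS (`127.0.0.11`) resolves that name to a swarm service. A hypothetical Python mimic of the capture, for illustration only:

```python
import re

def upstream_for(path):
    # mimic nginx `location ~ /(.*)`: everything after the leading
    # slash becomes the upstream name that docker's DNS resolves
    m = re.match(r"^/(.*)$", path)
    return m.group(1) if m else None

print(upstream_for("/whoami"))  # whoami
```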
# knex
insert ... on duplicate key update for MariaDB/MySQL
```javascript
async function insert_on_duplicated(table, data) {
  let insert = knex(table)
    .insert(data)
    .toString();
  let update = knex(table)
    .update(data)
    .toString()
    .replace(/^update .* set /i, "");
  return await knex.raw(`${insert} on duplicate key update ${update}`);
}
```
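The same string-assembly idea sketched in plain Python (`insert_on_duplicate_sql` is a made-up helper for illustration; it does no proper value escaping, so use a driver's parameter binding in real code):

```python
def insert_on_duplicate_sql(table, data):
    # naive quoting via repr(); real code must use parameter binding
    cols = ", ".join(data)
    vals = ", ".join(repr(v) for v in data.values())
    updates = ", ".join(f"{k} = {v!r}" for k, v in data.items())
    return (f"INSERT INTO {table} ({cols}) VALUES ({vals}) "
            f"ON DUPLICATE KEY UPDATE {updates}")

print(insert_on_duplicate_sql("users", {"id": 1, "name": "bob"}))
```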
## k8s
list secrets
- `kubectl -n kube-system get secret`

https://blog.alexellis.io/kubernetes-in-10-minutes/
## vue
vuejs without shitty webpack
```
npm install -D @vue/cli
npx vue init simple web
```
# DigitalOcean Spaces with goofys
place key and secret into `.aws/credentials`
```
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
```
and run
`./goofys --endpoint ams3.digitaloceanspaces.com melmac /home/ec2-user/t/`
# python
Python json dump with datetime: whenever `json.dumps` doesn't know how to serialize a value, it calls the function passed as `default`.
```
import datetime
import json

def dtconverter(o):
    if isinstance(o, datetime.datetime):
        return str(o)

print(json.dumps(my_py_dict_var, default=dtconverter))
```
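An alternative to the `default=` callback is subclassing `json.JSONEncoder`; its `default()` method is likewise only called for values the encoder can't serialize itself:

```python
import datetime
import json

class DateTimeEncoder(json.JSONEncoder):
    # called only for objects the base encoder can't handle
    def default(self, o):
        if isinstance(o, datetime.datetime):
            return o.isoformat()
        return super().default(o)

print(json.dumps({"ts": datetime.datetime(2020, 1, 1, 12, 0)}, cls=DateTimeEncoder))
# {"ts": "2020-01-01T12:00:00"}
```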
## ML

> Neural networks are universal approximators - meaning that for any function F and error E, there exists some neural network (needing only a single hidden layer) that can approximate F with error less than E.

- https://en.wikipedia.org/wiki/Artificial_neural_network#Theoretical_properties
- https://en.wikipedia.org/wiki/Multilayer_perceptron

> Normalisation is required so that all the inputs are at a comparable range.

With two inputs (x1 and x2), where x1 values range from 0 to 0.5 and x2 values range from 0 to 1000: a change of x1 by 0.5 is a 100% change, while a change of x2 by 0.5 is only 0.05%.
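Min-max scaling is one common way to bring such inputs onto a comparable range; a minimal sketch:

```python
def min_max_scale(values):
    # map values linearly onto [0, 1]
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

x1 = [0.0, 0.25, 0.5]      # small range
x2 = [0.0, 500.0, 1000.0]  # large range
print(min_max_scale(x1))  # [0.0, 0.5, 1.0]
print(min_max_scale(x2))  # [0.0, 0.5, 1.0]
```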
## puppet
- list all nodes
  `puppet cert list --all`
- remove a node
  `puppet cert clean <node name>`
- add / accept a node
  `puppet cert sign <node name>`
# MySQL / MariaDB
dump a live system without blocking
- for MyISAM
  `nice -n 19 ionice -c2 -n 7 mysqldump --lock-tables=false <dbname> > dump.sql`
- for InnoDB
  `nice -n 19 ionice -c2 -n 7 mysqldump --single-transaction=TRUE <dbname> > dump.sql`
- allow user to create databases with prefix
```
GRANT ALL PRIVILEGES ON `dev\_%`.* TO 'dev'@'%';
```
# NetBSD
```
PATH="/usr/pkg/sbin:$PATH"
PKG_PATH="ftp://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/amd64/8.0_current/All/"
export PATH PKG_PATH
pkg_add bash nano
```
## NetBSD luactl
```
modload lua
luactl create mycpu
luactl load mycpu ./cpu.lua
```
print to `/var/log/messages` using systm module
```
cat hw.lua
systm.print("hello kernel!\n")
modload luasystm
modstat |grep lua
luactl require helo systm
luactl load helo ./hw.lua
cat /var/log/messages
...
Oct 9 09:37:29 localhost /netbsd: hello kernel!
```
# ssh
create pub key from private
ssh-keygen -y -f my.key > my.pub
create pem from public
ssh-keygen -f ~/.ssh/my.pub -e -m PKCS8 > my.pem
encrypt message with pem
echo "some secret" |openssl rsautl -encrypt -pubin -inkey my.pem -ssl
echo "some secret" |openssl rsautl -encrypt -pubin -inkey my.pem -ssl > encrypted_message
echo "some secret" |openssl rsautl -encrypt -pubin -inkey my.pem -ssl -out encrypted_message
decrypt message with private
openssl rsautl -decrypt -inkey ~/.ssh/my.key -in encrypted_message
# nextcloud
call `cron.php` from nextcloud which is running in a docker container
```
docker exec -u www-data $(docker ps --filter "name=nextcloud" --format "{{.Names}}") php cron.php
```
mount with `davfs2`
`sudo mount -t davfs https://nextcloud/remote.php/webdav /mount/point`
## collabora code integration
deployed also as a service without publishing ports
`docker service create --network public -e 'domain=home\\.osuv\\.de' --name office collabora/code`
and the caddy config
```
office.osuv.de {
proxy /loleaflet https://office:9980 {
insecure_skip_verify
transparent
websocket
}
proxy /hosting/discovery https://office:9980 {
insecure_skip_verify
transparent
}
proxy /lool https://office:9980 {
insecure_skip_verify
transparent
websocket
}
tls markuman@gmail.com
}
```
and use `office.osuv.de` as endpoint
# xfs
increase filesystem
`xfs_growfs /mount/point/`
# yubikey
In general you get
- 2 slots
- 32 OATH credential slots
#### oath-hotp 2nd slot for ssh auth
1. generate secret
   `dd if=/dev/random bs=1k count=1 | sha1sum`
2. flash yubikey slot 2 with the generated secret
   `ykpersonalize -2 -o oath-hotp -o oath-hotp8 -o append-cr -a <SECRET>`
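What the key computes on each press is plain RFC 4226 HOTP; a minimal sketch with Python's stdlib, checked against the RFC's test vectors (the slot above uses 8 digits via `oath-hotp8`, so pass `digits=8` for that case):

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # dynamic truncation: low nibble of last byte picks a 4-byte window
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(hotp(b"12345678901234567890", 0))  # 755224 (RFC 4226 test vector)
```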
#### oath totp for aws
- set
`ykman oath add -t aws-username <YOUR_BASE_32_KEY>`
- get
`ykman oath code bergholm -s`
- list
`ykman oath list`
# ansible
run tests in docker container
`bin/ansible-test units -v --python 3.7 --docker default`
run sanity check for all
`./bin/ansible-test sanity --docker default`
run sanity check for one module
`bin/ansible-test sanity lib/ansible/modules/cloud/amazon/cloudwatchlogs_log_group_metric_filter.py --docker default`
run integration test
`./bin/ansible-test integration -v --python 3.7 cloudwatchlogs --docker --allow-unsupported`
# proxysql
change admin credentials
variables with an `admin-` prefix are ADMIN variables, and you should use these commands:
```
UPDATE global_variables SET variable_value='admin:N3wP4ssw3rd!' WHERE variable_name='admin-admin_credentials';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
```
monitor user
```
CREATE USER 'monitor'@'%' IDENTIFIED BY 'monitorpassword';
GRANT SELECT on sys.* to 'monitor'@'%';
```
# btrfs
on small devices
`sudo mkfs.btrfs --mixed -f /dev/nvme1n1`
mount with zstd compression
```yaml
- name: mount with zstd compression
mount:
path: /mnt/
src: /dev/nvme1n1
fstype: btrfs
opts: compress=zstd,discard,nofail,defaults
state: present
```
grow / resize btrfs partition
`sudo btrfs filesystem resize +3g /mnt/backup`
# MariaDB replica without locking (innodb) and without downtime
1. create replication user
```sql
GRANT SELECT,REPLICATION USER,REPLICATION CLIENT ON *.*
TO repl@'%' IDENTIFIED BY 'repl';
```
2. dump master
```
mysqldump --master-data=1 --single-transaction --flush-privileges \
--routines --triggers --all-databases > writer_dump.sql
```
3. grep for bin log position
```
grep "CHANGE MASTER TO MASTER_LOG_FILE" writer_dump.sql | head -n 1
CHANGE MASTER TO MASTER_LOG_FILE='defadda3c269-bin.000001', MASTER_LOG_POS=516401;
```
4. apply dump on reader node
```
mysql < writer_dump.sql
```
5. set replication status on reader node
```sql
CHANGE MASTER TO
MASTER_HOST='mariadb_writer_host',
MASTER_USER='repl',
MASTER_PASSWORD='repl',
MASTER_PORT=3306,
MASTER_LOG_FILE='defadda3c269-bin.000001',
MASTER_LOG_POS=516401,
MASTER_CONNECT_RETRY=10;
-- MASTER_USE_GTID=slave_pos (for GTID)
```
6. start slave
```
START SLAVE;
SHOW SLAVE STATUS;
```
# pip
build/install from local
`pip install . --user`
build for release
`python setup.py sdist`
upload release with twine
`twine upload dist/*`
`$HOME/.pypirc`
```
[pypi]
username = __token__
password = token-from-pypi
```
# gitea
The host's SSH port 22 is already in use, so the gitea SSH container is published on port 222.
SSH config that keeps plain `git@git.osuv.de` working by hopping through the host to localhost:222
```
Host git.osuv.de
User git
Hostname 127.0.0.1
Port 222
ProxyCommand ssh -q -W %h:%p osuv
```
# sshuttle
all traffic through sshuttle
`sudo sshuttle --dns -r m@h.osuv.de:22 0/0 -x h.osuv.de`
only subnet through sshuttle
`sudo sshuttle -r m@h.osuv.de 192.168.178.0/24 -x h.osuv.de`
specify ssh key
`sudo sshuttle --dns -r m@h.osuv.de:22 0/0 -x h.osuv.de --ssh-cmd 'ssh -i /your/key/path.pem'`
# gnome
set scale below 100%
`gsettings set org.gnome.desktop.interface text-scaling-factor 0.85`
# influxdb
show retentions
```
SHOW RETENTION POLICIES on fluentbit
```
create default retention policy
```
CREATE RETENTION POLICY "15_days_metrics" ON "fluentbit" DURATION 15d REPLICATION 1 default
```
create database
```
create database fluentbit
```
# solo key
### u2f for sudo
```
sudo dnf install pamu2fcfg pam-u2f
mkdir ~/.config/Yubico
pamu2fcfg > ~/.config/Yubico/u2f_keys
pamu2fcfg -n >> ~/.config/Yubico/u2f_keys # for a 2nd key
# in /etc/pam.d/sudo add
auth sufficient pam_u2f.so
```