# til
## javascript
Write code like it's synchronous
```javascript
let get = async URL => {
  const retval = await fetch(URL);
  if (retval.ok) {
    this.playlists = await retval.json();
  } else {
    console.error("doh! network error");
  }
};
get(URL);
```
Or use promise chaining
```javascript
fetch(URL)
  .then(stream => stream.json())
  .then(data => (this.playlists = data))
  .catch(error => console.error(error));
```
## bash
auto-reply `y` on installations or `fsck` repairs
`yes | pacman -S <something>`
convert all svg files from current path into pngs
`find . -name '*.svg' -exec mogrify -format png {} +`
network diff
`FILE=/tmp/a`
`diff -ru <(sort -u "$FILE") <(ssh user@host "sort -u $FILE")`
## ffmpeg
extract audio-only from video file with ID3 tags
`ffmpeg -i <input video> -metadata title="Title" -metadata artist="Artist" -ab 256k file.mp3`
record screen
`ffmpeg -f x11grab -s 1366x768 -i :0.0 -r 25 -threads 2 -c:v libx264 -crf 0 -preset ultrafast output.mkv`
## html
```html
<style type="text/css">
  table {
    page-break-inside: auto;
  }
  tr {
    page-break-inside: avoid;
    page-break-after: auto;
  }
  thead {
    display: table-header-group;
  }
  tfoot {
    display: table-footer-group;
  }
</style>
```
## iptables
drop all but accept from one ip
`iptables -A INPUT -p tcp --dport 8000 -s 1.2.3.4 -j ACCEPT`
`iptables -A INPUT -p tcp --dport 8000 -j DROP`
drop all incoming ssh connections
`iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j DROP`
## docker
Delete all containers
- `docker rm $(docker ps -a -q)`
Delete all images
- `docker rmi $(docker images -q)`
Delete all dangling images
- `docker rmi $(docker images -f dangling=true -q)`
Create docker network
- `docker network create mynet`
clean up
- `docker system prune`
- `docker volume rm $(docker volume ls -q --filter dangling=true)`
run two containers from image `img` with the names `foo` and `bar`; they can reach each other via the hostnames `foo` and `bar`
- `docker run --name foo --net mynet img`
- `docker run --name bar --net mynet img`
copy a file from an image to `/tmp/some.file`
`docker cp $(docker create my/image:latest):/etc/some.file /tmp/some.file`
### curl docker sock api
```
curl -s --unix-socket /var/run/docker.sock http://localhost/services | jq '.[0]'
curl --unix-socket /var/run/docker.sock http://localhost/nodes | jq
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
docker service ls --format '{{json .}}' | jq '.'
```
## git
Set git to use the credential memory cache
`git config --global credential.helper cache`
Set the cache to timeout after 1 hour (setting is in seconds)
`git config --global credential.helper 'cache --timeout=3600'`
Set default editor
`git config --global core.editor "vim"`
create a patch from a modified file
`git diff <modified file> > this.patch`
apply a diff patch
`git apply this.patch`
checkout a pull request
add `fetch = +refs/pull/*/head:refs/remotes/origin/pr/*` to `.git/config` under the `[remote "origin"]` section.
do `git fetch origin`. Now you can `git checkout pr/999`.
# youtube-dl
search and download first match
`youtube-dl ytsearch:damniam`
set auto id3 tags
`youtube-dl --prefer-ffmpeg --embed-thumbnail --add-metadata --metadata-from-title "%(artist)s - %(title)s" --audio-quality 0 --audio-format mp3 --extract-audio https://www.youtube.com/watch?v=mvK_5nNPKr8`
# retropie
for rom converting
`community/ecm-tools 1.03-1 [Installed]`
Error Code Modeler
`ecm2bin rom.img.ecm`
# ufw
list all rules
- `ufw status`
disable/enable firewall
- `ufw enable`
- `ufw disable`
# systemd
`systemctl list-unit-files`
# vs code
pipe into vs-code
`ps fax | grep code | code-oss -` # for using the open source version of vs code
compare open file to clipboard
`workbench.files.action.compareWithClipboard`
# Nodejs
## sleep()
```
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

sleep(1000).then(() => console.log("after 1 second"));

async function main() {
  await sleep(20000);
  console.log("do something after 20 seconds");
}
main();
```
# nginx
proxy an api service and add cors headers and basic auth
```
load_module modules/ngx_http_headers_more_filter_module.so;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name 127.0.0.1;
    location / {
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Authorization "Basic ZW5lcmdpY29zOmVuZXJnaWNvcw==";
      set $cors "1";
      if ($request_method = 'OPTIONS') {
        set $cors "${cors}o";
      }
      if ($cors = "1") {
        more_set_headers 'Access-Control-Allow-Origin: $http_origin';
        more_set_headers 'Access-Control-Allow-Credentials: true';
      }
      if ($cors = "1o") {
        more_set_headers 'Access-Control-Allow-Origin: $http_origin';
        more_set_headers 'Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE';
        more_set_headers 'Access-Control-Allow-Credentials: true';
        more_set_headers 'Access-Control-Allow-Headers: Origin,Content-Type,Accept';
        add_header Content-Length 0;
        add_header Content-Type text/plain;
        return 204;
      }
      proxy_pass https://some.url;
    }
  }
}
```
A Dockerfile would look like this
```
FROM alpine:3.7
RUN apk --update --no-cache add nginx nginx-mod-http-headers-more
COPY nginx.conf /etc/nginx/nginx.conf
RUN mkdir /run/nginx
EXPOSE 80
CMD nginx -g 'daemon off;'
```
nginx dynamic reverse proxy for docker swarm mode
```
events {
  worker_connections 1024;
}

http {
  server {
    resolver 127.0.0.11;
    location ~ /(.*) {
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      set $upstream $1;
      proxy_pass http://$upstream;
    }
  }
}
```
# knex
insert on duplicate key for MariaDB/MySQL
```javascript
async function insert_on_duplicated(table, data) {
  let insert = knex(table)
    .insert(data)
    .toString();
  let update = knex(table)
    .update(data)
    .toString()
    .replace(/^update .* set /i, "");
  return await knex.raw(`${insert} on duplicate key update ${update}`);
}
```
## k8s
list secrets
- `kubectl -n kube-system get secret`
https://blog.alexellis.io/kubernetes-in-10-minutes/
## vue
vuejs without shitty webpack
```
npm install -D @vue/cli
npx vue init simple web
```
# do spaces with goofys
place key and secret into `.aws/credentials`
```
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
```
and do
`./goofys --endpoint ams3.digitaloceanspaces.com melmac /home/ec2-user/t/`
# python
Python json dump with datetime. Every time `json.dumps` doesn't know how to convert a value, it calls the `default()` function.
```
import datetime
import json

def dtconverter(o):
    if isinstance(o, datetime.datetime):
        return o.__str__()

print(json.dumps(my_py_dict_var, default=dtconverter))
```
## ML
> Neural networks are universal approximators - meaning that for any function F and error E, there exists some neural network (needing only a single hidden layer)
> that can approximate F with error less than E.
- https://en.wikipedia.org/wiki/Artificial_neural_network#Theoretical_properties
- https://en.wikipedia.org/wiki/Multilayer_perceptron
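The quote can be stated a bit more precisely; this is an informal sketch of the single-hidden-layer universal approximation theorem, where σ is a non-polynomial activation (e.g. a sigmoid):

```latex
% for continuous F on a compact set K and any error E > 0,
% there is a width N and weights w_i, b_i, v_i such that
\left| F(x) - \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < E
\qquad \text{for all } x \in K
```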
> Normalisation is required so that all the inputs are at a comparable range.
With two inputs (x1 and x2), where x1 values range from 0 to 0.5 and x2 values range from 0 to 1000, a change of x1 by 0.5 is a 100 % change, while a change of x2 by 0.5 is only 0.05 %.
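A minimal sketch of that scaling idea in Python (min-max normalization; the x1/x2 values are made-up examples matching the ranges above):

```python
def min_max_normalize(values):
    """Scale a list of numbers to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # constant input: no spread to scale, map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# x1 spans 0..0.5, x2 spans 0..1000 -- very different scales
x1 = [0.0, 0.1, 0.25, 0.5]
x2 = [0.0, 200.0, 500.0, 1000.0]

# after normalization both inputs live in [0, 1]
print(min_max_normalize(x1))  # [0.0, 0.2, 0.5, 1.0]
print(min_max_normalize(x2))  # [0.0, 0.2, 0.5, 1.0]
```

After this step, a 0.1 move in either feature means the same relative change, so neither input dominates the training updates.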
## puppet
- list all nodes
`puppet cert list --all`
- remove a node
`puppet cert clean <node name>`
- add / accept a node
`puppet cert sign <node name>`
# MySQL / MariaDB
dump a live system without blocking
- for MyISAM
`nice -n 19 ionice -c2 -n 7 mysqldump --lock-tables=false <dbname> > dump.sql`
- for InnoDB
`nice -n 19 ionice -c2 -n 7 mysqldump --single-transaction=TRUE <dbname> > dump.sql`
- allow user to create databases with prefix
```
GRANT ALL PRIVILEGES ON `dev\_%`.* TO 'dev'@'%';
```
# NetBSD
```
PATH="/usr/pkg/sbin:$PATH"
PKG_PATH="ftp://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/amd64/8.0_current/All/"
export PATH PKG_PATH
pkg_add bash nano
```
## NetBSD luactl
```
modload lua
luactl create mycpu
luactl load mycpu ./cpu.lua
```
print to `/var/log/messages` using the systm module
```
cat hw.lua
systm.print("hello kernel!\n")

modload luasystm
modstat | grep lua
luactl require helo systm
luactl load helo ./hw.lua
cat /var/log/messages
...
Oct 9 09:37:29 localhost /netbsd: hello kernel!
```
# ssh
create pub key from private
`ssh-keygen -y -f my.key > my.pub`
create pem from public
`ssh-keygen -f ~/.ssh/my.pub -e -m PKCS8 > my.pem`
encrypt message with pem
`echo "some secret" | openssl rsautl -encrypt -pubin -inkey my.pem -ssl`
`echo "some secret" | openssl rsautl -encrypt -pubin -inkey my.pem -ssl > encrypted_message`
`echo "some secret" | openssl rsautl -encrypt -pubin -inkey my.pem -ssl -out encrypted_message`
decrypt message with private
`openssl rsautl -decrypt -inkey ~/.ssh/my.key -in encrypted_message`
# nextcloud
call `cron.php` from nextcloud running in a docker container
```
docker exec -u www-data $(docker ps --filter "Name=nextcloud" --format "{{.Names}}") php cron.php
```
mount with `davfs2`
`sudo mount -t davfs https://nextcloud/remote.php/webdav /mount/point`
## collabora code integration
deployed as a swarm service without publishing ports
`docker service create --network public -e 'domain=home\\.osuv\\.de' --name office collabora/code`
and the caddy config
```
office.osuv.de {
  proxy /loleaflet https://office:9980 {
    insecure_skip_verify
    transparent
    websocket
  }
  proxy /hosting/discovery https://office:9980 {
    insecure_skip_verify
    transparent
  }
  proxy /lool https://office:9980 {
    insecure_skip_verify
    transparent
    websocket
  }
  tls markuman@gmail.com
}
```
and use `office.osuv.de` as endpoint
# xfs
increase filesystem
`xfs_growfs /mount/point/`
# yubikey
In general you get
- 2 slots
- 32 OATH credential slots
#### oath-hotp 2nd slot for ssh auth
1. generate secret
`dd if=/dev/random bs=1k count=1 | sha1sum`
2. flash yubikey slot 2 with the generated secret
`ykpersonalize -2 -o oath-hotp -o oath-hotp8 -o append-cr -a <SECRET>`
#### oath totp for aws
- set
`ykman oath add -t aws-username <YOUR_BASE_32_KEY>`
- get
`ykman oath code bergholm -s`
- list
`ykman oath list`
# ansible
run tests in docker container
`bin/ansible-test units -v --python 3.7 --docker default`
run sanity check for all
`./bin/ansible-test sanity --docker default`
run sanity check for one module
`bin/ansible-test sanity lib/ansible/modules/cloud/amazon/cloudwatchlogs_log_group_metric_filter.py --docker default`
run integration test
`./bin/ansible-test integration -v --python 3.7 cloudwatchlogs --docker --allow-unsupported`
# proxysql
change admin credentials
variables with an `admin-` prefix are ADMIN variables, and you should use these commands:
```
UPDATE global_variables SET variable_value='admin:N3wP4ssw3rd!' WHERE variable_name='admin-admin_credentials';
SAVE ADMIN VARIABLES TO DISK;
LOAD ADMIN VARIABLES TO RUNTIME;
```
monitor user
```
CREATE USER 'monitor'@'%' IDENTIFIED BY 'monitorpassword';
GRANT SELECT on sys.* to 'monitor'@'%';
```
# btrfs
on small devices
`sudo mkfs.btrfs --mixed -f /dev/nvme1n1`
mount with zstd compression
```yaml
- name: mount with zstd compression
  mount:
    path: /mnt/
    src: /dev/nvme1n1
    fstype: btrfs
    opts: compress=zstd,discard,nofail,defaults
    state: present
```
grow / resize btrfs partition
`sudo btrfs filesystem resize +3g /mnt/backup`
# MariaDB replica without locking (innodb) and without downtime
1. create replication user
```sql
GRANT SELECT,REPLICATION SLAVE,REPLICATION CLIENT ON *.*
TO repl@'%' IDENTIFIED BY 'repl';
```
2. dump master
```
mysqldump --master-data=1 --single-transaction --flush-privileges \
  --routines --triggers --all-databases > writer_dump.sql
```
3. grep for bin log position
```
grep "CHANGE MASTER TO MASTER_LOG_FILE" writer_dump.sql | head -n 1
CHANGE MASTER TO MASTER_LOG_FILE='defadda3c269-bin.000001', MASTER_LOG_POS=516401;
```
4. apply dump on reader node
```
mysql < writer_dump.sql
```
5. set replication status on reader node
```sql
CHANGE MASTER TO
  MASTER_HOST='mariadb_writer_host',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl',
  MASTER_PORT=3306,
  MASTER_LOG_FILE='defadda3c269-bin.000001',
  MASTER_LOG_POS=516401,
  MASTER_CONNECT_RETRY=10;
-- MASTER_USE_GTID=slave_pos for GTID
```
6. start slave
```
START SLAVE;
SHOW SLAVE STATUS;
```
# pip
build/install from local
`pip install . --user`
build for release
`python setup.py sdist`
upload release with twine
`twine upload dist/*`
`$HOME/.pypirc`
```
[pypi]
username = __token__
password = token-from-pypi
```
# gitea
The host's SSH port is already in use; the gitea ssh container is published on port 222.
SSH config to use port 22 and hop via localhost:222
```
Host git.osuv.de
  User git
  Hostname 127.0.0.1
  Port 222
  ProxyCommand ssh -q -W %h:%p osuv
```
# sshuttle
all traffic through sshuttle
`sudo sshuttle --dns -r m@h.osuv.de:22 0/0 -x h.osuv.de`
only a subnet through sshuttle
`sudo sshuttle -r m@h.osuv.de 192.168.178.0/24 -x h.osuv.de`
specify ssh key
`sudo sshuttle --dns -r m@h.osuv.de:22 0/0 -x h.osuv.de --ssh-cmd 'ssh -i /your/key/path.pem'`
# gnome
set scale below 100%
`gsettings set org.gnome.desktop.interface text-scaling-factor 0.85`
# influxdb
show retentions
```
SHOW RETENTION POLICIES on fluentbit
```
create default retention policy
```
CREATE RETENTION POLICY "15_days_metrics" ON "fluentbit" DURATION 15d REPLICATION 1 default
```
create database
```
create database fluentbit
```
# solo key
### u2f for sudo
```
sudo dnf install pamu2fcfg pam-u2f
mkdir ~/.config/Yubico
pamu2fcfg > ~/.config/Yubico/u2f_keys
pamu2fcfg -n >> ~/.config/Yubico/u2f_keys # for a 2nd key
# in /etc/pam.d/sudo add
auth sufficient pam_u2f.so
```