Search returned 14 matching posts.
2022-06-16
Deploying yolov5s with NCNN
**1. Build and install NCNN**

For build instructions, see "Building and installing ncnn on Linux" (referenced below).

**2. Model conversion (pt → onnx → ncnn)**

**Dead end: the `Reshape` parameters in the converted `.param` file come out wrong.**

**2.1 Convert the pt model to ONNX**

```bash
# pt --> onnx
python export.py --weights yolov5s.pt --img 640 --batch 1
# install onnx-simplifier
pip install onnx-simplifier
# simplify the model with onnxsim
python -m onnxsim yolov5s.onnx yolov5s-sim.onnx
```

onnxsim finishes with "Simplifying... Finish!" and reports the following difference:

| Op | Original Model | Simplified Model |
| --- | --- | --- |
| Add | 10 | 10 |
| Concat | 17 | 17 |
| Constant | 20 | 0 |
| Conv | 60 | 60 |
| MaxPool | 3 | 3 |
| Mul | 69 | 69 |
| Pow | 3 | 3 |
| Reshape | 6 | 6 |
| Resize | 2 | 2 |
| Sigmoid | 60 | 60 |
| Split | 3 | 3 |
| Transpose | 3 | 3 |
| Model Size | 28.0MiB | 28.0MiB |

**2.2 Convert the model with onnx2ncnn**

Add `ncnn/build/tools/onnx` to your PATH, then run:

```bash
onnx2ncnn yolov5s-sim.onnx yolov5s_6.0.param yolov5s_6.0.bin
```

**2.3 Test**

Copy `yolov5s_6.0.param` and `yolov5s_6.0.bin` to `ncnn/build/examples/` and run:

```bash
./yolov5 image-path
```

This crashes with `Segmentation fault (core dumped)`.

**3. Model conversion (pt → torchscript → ncnn)**

**3.1 Convert the pt model to TorchScript**

```bash
# pt --> torchscript
python export.py --weights yolov5s.pt --include torchscript --train
```

**3.2 Convert with the prebuilt pnnx tool**

pnnx download: https://github.com/pnnx/pnnx

Run the conversion to obtain `yolov5s.ncnn.param` and `yolov5s.ncnn.bin`. Specify `inputshape`, and additionally `inputshape2` to produce a model that supports dynamic input shapes:

```bash
./pnnx yolov5s.torchscript inputshape=[1,3,640,640] inputshape2=[1,3,320,320]
```

**3.3 Test**

Files for a direct test: yolov5_pnnx.zip. Copy `yolov5s.ncnn.param` and `yolov5s.ncnn.bin` to `ncnn/build/examples/` and run:

```bash
./yolov5_pnnx image-path
```

References: yolov5 model deployment with NCNN (detailed walkthrough) · Building and installing ncnn on Linux & Jetson Nano · The YOLOv5-to-NCNN conversion process · Porting ncnn to the Jetson Nano (detailed notes) · ncnn implementation of ultralytics YOLOv5 detection (2nd edition)
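Before inference, the yolov5 ncnn example letterbox-resizes the input image to the 640×640 network size, padding to a multiple of the stride. The sketch below is a hypothetical `letterbox_dims` helper that illustrates this scale-and-pad computation (stride-32 padding as in yolov5); it is not code taken from the ncnn example.

```python
def letterbox_dims(w, h, target=640, stride=32):
    """Scale (w, h) to fit inside a target square, then compute the
    extra padding needed so the padded dims are multiples of the stride."""
    scale = min(target / w, target / h)
    new_w, new_h = round(w * scale), round(h * scale)
    # pad up to the next multiple of the model stride
    pad_w = (stride - new_w % stride) % stride
    pad_h = (stride - new_h % stride) % stride
    return new_w, new_h, pad_w, pad_h

print(letterbox_dims(1280, 720))  # a 16:9 frame -> (640, 360, 0, 24)
```

The returned padding is split between the two sides of the image before the blob is handed to the network.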
897 reads · 0 comments · 0 likes
2022-06-16
Enabling VNC on the Jetson Nano
**1. Set up the VNC service on the Nano**

1. Update package lists:

```bash
sudo apt-get update
```

2. Install the vino server. The image I used ships with it preinstalled, but older images may not, so run this to make sure:

```bash
sudo apt-get install vino
```

3. Enable the VNC service:

```bash
sudo ln -s ../vino-server.service /usr/lib/systemd/user/graphical-session.target.wants
```

4. Configure the VNC service:

```bash
gsettings set org.gnome.Vino prompt-enabled false
gsettings set org.gnome.Vino require-encryption false
```

5. Edit the org.gnome.Vino schema to restore the missing "enabled" key, which allows remote control via the RFB protocol. Open the file, append the key below at the end, then save and exit:

```bash
sudo vim /usr/share/glib-2.0/schemas/org.gnome.Vino.gschema.xml
```

```xml
<key name='enabled' type='b'>
  <summary>Enable remote access to the desktop</summary>
  <description>
    If true, allows remote access to the desktop via the RFB
    protocol. Users on remote machines may then connect to the
    desktop using a VNC viewer.
  </description>
  <default>false</default>
</key>
```

6. Compile the schemas for Gnome:

```bash
sudo glib-compile-schemas /usr/share/glib-2.0/schemas
```

7. Add vino-server to the session startup programs, using this command line:

```
/usr/lib/vino/vino-server
```

8. Test the connection.

**2. Start VNC automatically on boot**

1. Enable the vino service:

```bash
gsettings set org.gnome.Vino enabled true
```

2. Create the VNC autostart file:

```bash
mkdir -p ~/.config/autostart
sudo vim ~/.config/autostart/vino-server.desktop
```

3. Put the following into vino-server.desktop:

```ini
[Desktop Entry]
Type=Application
Name=Vino VNC server
Exec=/usr/lib/vino/vino-server
NoDisplay=true
```

Tip: the server only starts once a desktop session begins, so consider disabling the login password so the session starts automatically.

References: Remote desktop control of the Jetson Nano via VNC (verified on the Nano)
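The autostart file from step 2 can also be written by a script; a minimal sketch using the exact entry values from the steps above (the helper name is mine):

```python
from pathlib import Path

# Desktop entry contents, matching step 2.3 above
DESKTOP_ENTRY = """\
[Desktop Entry]
Type=Application
Name=Vino VNC server
Exec=/usr/lib/vino/vino-server
NoDisplay=true
"""

def write_autostart(home: Path) -> Path:
    """Create <home>/.config/autostart/vino-server.desktop and return its path."""
    autostart = home / ".config" / "autostart"
    autostart.mkdir(parents=True, exist_ok=True)
    target = autostart / "vino-server.desktop"
    target.write_text(DESKTOP_ENTRY)
    return target
```

Calling `write_autostart(Path.home())` on the Nano reproduces steps 2.2 and 2.3 in one go.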
690 reads · 0 comments · 0 likes
2022-05-16
Setting up a PyTorch GPU environment on Jetson boards (NX/AGX/Nano)
Note: the Jetson Xavier NX cannot use the `nvidia-smi` command.

**0. Check the JetPack version**

```bash
sudo apt-cache show nvidia-jetpack
```

```
Package: nvidia-jetpack
Version: 4.6-b199
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
```

You will need this JetPack version when choosing package versions later.

**1. Install miniconda**

Download page: https://docs.conda.io/en/latest/miniconda.html

```bash
wget https://github.com/Archiconda/build-tools/releases/download/0.2.3/Archiconda3-0.2.3-Linux-aarch64.sh
bash Archiconda3-0.2.3-Linux-aarch64.sh
```

Then create your own virtual environment:

```bash
conda create -n base-jupiter python=3.6
```

**2. Install pytorch-gpu**

The official PyTorch download index does not provide an aarch64 pytorch-gpu build. Taking LTS (1.8.2) as an example, the official index is https://download.pytorch.org/whl/lts/1.8/torch_lts.html; open it and you can verify there is no aarch64 GPU build. The GPU build therefore has to come from NVIDIA: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048

```bash
wget https://nvidia.box.com/shared/static/fjtbno0vpo676a25cgvuqc1wty0fkkg6.whl -O torch-1.10.0-cp36-cp36m-linux_aarch64.whl
pip install numpy Cython
pip install torch-1.10.0-cp36-cp36m-linux_aarch64.whl
```

Problems encountered and their fixes:

`ImportError: libopenblas.so.0: cannot open shared object file: No such file or directory`

```bash
sudo apt-get install libopenblas-dev
```

`OSError: libmpi_cxx.so.20: cannot open shared object file: No such file or directory`

```bash
sudo apt-get install libopenmpi-dev
```

`import torch` fails with `Illegal instruction (core dumped)`

```bash
vim ~/.bashrc
# append this to the end of the file
export OPENBLAS_CORETYPE=ARMV8
# then reload
source ~/.bashrc
```

**3. Install torchvision**

Download: https://github.com/pytorch/vision

Version compatibility:

| torch | torchvision | python |
| --- | --- | --- |
| main / nightly | main / nightly | >=3.7, <=3.10 |
| 1.11.0 | 0.12.0 | >=3.7, <=3.10 |
| 1.10.2 | 0.11.3 | >=3.6, <=3.9 |
| 1.10.1 | 0.11.2 | >=3.6, <=3.9 |
| 1.10.0 | 0.11.1 | >=3.6, <=3.9 |
| 1.9.1 | 0.10.1 | >=3.6, <=3.9 |
| 1.9.0 | 0.10.0 | >=3.6, <=3.9 |
| 1.8.2 | 0.9.2 | >=3.6, <=3.9 |

```bash
git clone -b v0.11.1 https://github.com/pytorch/vision.git vision-0.11.1
cd vision-0.11.1
export BUILD_VERSION=0.11.1
python setup.py install
```

**4. Verify**

```
(base-jupiter) nvidia@nx:~$ python
Python 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
```

References: Checking the JetPack version on Jetson products · Setting up a pytorch gpu environment on the NVIDIA Jetson Xavier NX (detailed) · NVIDIA Jetson TX2 pytorch install error: `import torch` gives Illegal instruction (core dumped) · ImportError: libopenblas.so.0: cannot open shared object file · Installing the Archiconda environment manager on the Jetson AGX Xavier and using opencv in a virtual environment · Installing torch/torchvision on the Jetson AGX Xavier and running yolov5 · https://github.com/pytorch/vision
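The torch/torchvision pairing from the compatibility table can be encoded as a small lookup, so the right `git clone -b` tag follows directly from the installed torch version. A sketch (the helper name is mine; the pairs are from the table above):

```python
# torch version -> matching torchvision release, per the compatibility table
TORCHVISION_FOR_TORCH = {
    "1.11.0": "0.12.0",
    "1.10.2": "0.11.3",
    "1.10.1": "0.11.2",
    "1.10.0": "0.11.1",
    "1.9.1": "0.10.1",
    "1.9.0": "0.10.0",
    "1.8.2": "0.9.2",
}

def torchvision_tag(torch_version: str) -> str:
    """Return the torchvision git tag to build for a given torch version."""
    return "v" + TORCHVISION_FOR_TORCH[torch_version]

# torch 1.10.0 (the wheel installed above) pairs with torchvision v0.11.1
print(torchvision_tag("1.10.0"))  # v0.11.1
```

With the torch-1.10.0 wheel above, this yields the `v0.11.1` tag used in the clone command.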
2,111 reads · 4 comments · 0 likes
2022-03-13
Installing Miniconda (ARMv8, 64-bit) on the Raspberry Pi 4B
**1. Check your OS version with `uname -a`**

```
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 5.10.52-v8+ #1441 SMP PREEMPT Tue Aug 3 18:14:03 BST 2021 aarch64 GNU/Linux
```

**2. Download the installer**

Pick the build matching your OS from the Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/

Since my system is 64-bit, I download the Linux-aarch64 build:

```bash
wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-py37_4.9.2-Linux-aarch64.sh
```

Note: miniconda releases newer than 4.9 currently do not work on the Raspberry Pi's arm64 OS; they install but cannot run properly. Stick to the version given above!

**3. Install and activate**

```bash
bash Miniconda3-py37_4.9.2-Linux-aarch64.sh
source ~/.bashrc
```

**4. Verify**

```
(base) pi@raspberrypi:/data $ conda -V
conda 4.9.2
```

References: Installing miniconda on the Raspberry Pi 4B
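The compatibility note above ("anything newer than 4.9 is broken on the Pi's arm64 OS") can be expressed as a quick version guard; treat the threshold as a heuristic taken from that note, and the helper name as mine:

```python
def miniconda_ok_for_pi(version: str) -> bool:
    """Accept only miniconda <= 4.9.x, per the arm64 compatibility
    note above (newer releases install but fail to run on the Pi)."""
    major, minor = (int(x) for x in version.split(".")[:2])
    return (major, minor) <= (4, 9)

print(miniconda_ok_for_pi("4.9.2"))   # True  (the recommended build)
print(miniconda_ok_for_pi("4.12.0"))  # False (too new for the Pi)
```

Note the tuple comparison: splitting on "." and comparing integers avoids the string-comparison trap where "4.12" < "4.9".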
1,460 reads · 1 comment · 0 likes
2021-12-15
Floating-point performance (FLOPS) of the Raspberry Pi and other common hardware
**1. The GFLOPS/W of the various machines in the VMW Research Group**

| # | Name | GFLOPS/W | GFLOPS | Avg / max / idle power | Arch / CPU | Cores (threads) | RAM | Linpack settings |
|---|---|---|---|---|---|---|---|---|
| 1 | haswell-ep | 2.13 | 428 | 201 / 298 / 58.7 W | x86_64, Intel 6/63/2 hsw e5-2640v3 | 16 (32) | 80GB | N=80000, 16 threads, OpenBLAS |
| 2 | Raspberry Pi-4B (4GB), 64-bit kernel/userspace | 2.02 | 13.5 | 6.66 / 7.30 / 2.56 W | aarch64, ARMv8 Cortex A72 | 4 | 4GB | N=20000, 4 threads, OpenBLAS |
| 3 | haswell/quadro (CPU only) | 1.68 | 181 | 107.9 / 134.1 / 29.3 W | x86_64, Intel 6/60/3 hsw i7-4790 (Quadro K2200) | 4 (8) | 16GB | N=40000, 4 threads, OpenBLAS |
| 4 | broadwell macbook-air | 1.64 | 47.7 | 29.1 / 32.6 / 10.0 W | x86_64, Intel 6/61/4 bdw i5-5250U | 2 (4) | 4GB | N=20000, 2 threads, OpenBLAS |
| 5 | haswell desktop | 1.56 | 145 | 92.7 / 126.6 / 22.3 W | x86_64, Intel 6/60/3 hsw i7-4770 | 4 (8) | 4GB | N=20000, 4 threads, OpenBLAS |
| 6 | Raspberry Pi-4B (1GB) | 1.50 | 9.92 | 6.6 / 7.9 / 2.9 W | ARM, ARMv7/8 Cortex A72 | 4 | 1GB | N=9000, 4 threads, OpenBLAS |
| 7 | haswell desktop (instrumented) | 1.47 | 115 | 80.6 / 107 / 25.9 W | x86_64, Intel 6/60/3 hsw i5-4570S | 4 (4) | 4GB | N=20000, 4 threads, OpenBLAS |
| 8 | Raspberry Pi-4B (4GB) | 1.35 | 9.69 | 7.2 / 8.2 / 2.8 W | ARM, ARMv7/8 Cortex A72 | 4 | 4GB | N=19000, 4 threads, OpenBLAS |
| 9 | ivb mac-mini | 1.21 | 41.2 | 33.9 / 35.8 / 11.5 W | x86_64, Intel 6/58/9 ivb i5-3210M | 2 (4) | 4GB | N=20000, 2 threads, OpenBLAS |
| 10 | jetson-tx1 | 1.20 | 16 | 13.4 / 15.3 / 2.1 W | ARM, ARMv8 Cortex A57 (NVIDIA) | 4 | 4GB | N=20000, 4 threads, OpenBLAS |
| 11 | Raspberry Pi-3A+ | 1.19 | 5.00 | 4.1 / 7.6 / 1.4 W | ARM, ARMv7/8 Cortex A53 | 4 | 512MB | N=5000, 4 threads, OpenBLAS |
| 12 | raspberry pi 2B-v1.2 | 1.07 | 4.43 | 4.1 / 5.1 / 1.7 W | ARM, ARMv7/8 Cortex A53 | 4 | 1GB | N=8000, 4 threads, OpenBLAS |
| 13 | ivb macbook-air | 1.02 | 34.5 | 34.0 / 37.2 / 13.8 W | x86_64, Intel 6/58/9 ivb i5-3427U | 2 (4) | 4GB | N=10000, 2 threads, OpenBLAS |
| 14 | raspberry pi3 (3 Model B) | 0.813* | 3.62 | 4.3 / 4.8 / 1.8 W | ARM, ARMv7/8 Cortex A53 | 4 | 1GB | N=6000*, 4 threads, OpenBLAS |
| 15 | fam17h-epyc | 0.795 | 109 | 137 / 151 / 67 W | x86_64, AMD 23/1/2 EPYC 7251 | 8 (16) | 16GB | N=40000, 8 threads, OpenBLAS |
| 16 | raspberry pi 3B+ | 0.73 | 5.3 | 7.3 / 9.4 / 2.6 W | ARM, ARMv7/8 Cortex A53 | 4 | 1GB | N=10000, 4 threads, OpenBLAS |
| 17 | odroid-xu | 0.599 | 8.3 | 13.9 / 18.4 / 2.7 W | ARM, ARMv7 Cortex A7/A15 (Exynos 5 Octa) | 4 big + 4 little | 2GB | N=12000, 4 threads, OpenBLAS |
| 18 | fam15h-piledriver | 0.466 | 122 | 262 / 335 / 167 W | x86_64, AMD 21/2/0 Opteron 6376 | 16 (32) | 16GB | N=40000, 16 threads, OpenBLAS |
| 19 | dragonboard | 0.450 | 2.10 | 4.7 / 5.7 / 2.4 W | ARM, ARMv8 Cortex-A53 (Snapdragon 410c) | 4 | 1GB | N=8000, 4 threads, OpenBLAS |
| 20 | haswell/quadro (GPU only) | 0.436 | 38.4 (double) | 88.0 / 121.1 / 29.2 W | NVIDIA Quadro K2200 | - | 4GB | N=40000, hpl-cuda |
| 21 | raspberry pi2 (Model 2) | 0.432 | 1.47 | 3.4 / 3.6 / 1.8 W | ARM, ARMv7 Cortex A7 | 4 | 1GB | N=10000, 4 threads, OpenBLAS |
| 22 | fam15h-a10 | 0.432 | 54 | 125.6 / 148.6 / 28.2 W | x86_64, AMD 21/19/1 A10-6800B | 4 | 8GB | N=30000, 4 threads, OpenBLAS |
| 23 | fam16h-a8-jaguar | 0.354 | 14.1 | 39.7 / 43.6 / 22.5 W | x86_64, AMD 22/48/1 A8-6410 | 4 | 4GB | N=10000, 4 threads, OpenBLAS |
| 24 | core2 | 0.292 | 18.0 | 61.7 / 67.9 / 23.4 W | x86_64, Intel 6/23/10 Core2 P8700 | 2 | 4GB | N=15000, 2 threads, OpenBLAS |
| 25 | chromebook | 0.277 | 3.0 | 10.7 / 11.1 / 5.9 W | ARM, ARMv7 Cortex A15 (Exynos 5 Dual) | 2 | 2GB | N=10000, 2 threads, OpenBLAS |
| 26 | fam10h-phenom | 0.277 | 40.3 | 145.4 / 175.0 / 69.5 W | x86_64, AMD 16/4/3 Phenom II X4 955 | 4 | 2GB | N=15000, 4 threads, OpenBLAS |
| 27 | raspberry pi-zero-w | 0.238 | 0.247 | 1.0 / 1.1 / 0.6 W | ARM, ARMv6 BCM2835 (Model Zero-W) | 1 | 512MB | N=4000, 1 thread, OpenBLAS |
| 28 | raspberry pi-zero | 0.236 | 0.319 | 1.3 / 1.4 / 0.8 W | ARM, ARMv6 BCM2835 (Model Zero) | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 29 | raspberry pi-aplus | 0.223 | 0.218 | 1.0 / 1.0 / 0.8 W | ARM, ARMv6 BCM2835 (Model A+) | 1 | 256MB | N=4000, 1 thread, OpenBLAS |
| 30 | cubieboard2 | 0.194 | 0.861 | 4.4 / 4.6 / 2.2 W | ARM, ARMv7 Cortex A7 (Allwinner A20) | 2 | 1GB | N=8000, 2 threads, OpenBLAS |
| 31 | atom-cedarview desktop | 0.170 | 3.1 | 18.2 / 18.5 / 15.5 W | x86_64, Intel 6/54/1 Atom D2550 | 2 (4) | 4GB | N=10000, 2 threads, OpenBLAS |
| 32 | pi-cluster | 0.166 | 15.5 | 93.1 / 96.8 / 71.3 W | arm, Cortex A7 (Raspberry Pi 2 cluster) | 96 | 24GB | N=48000, 96 threads, OpenBLAS |
| 33 | pandaboard-es | 0.163 | 0.951 | 5.8 / 6.5 / 3.0 W | ARM, ARMv7 Cortex A9 (ES, OMAP4) | 2 | 1GB | N=4000, 2 threads, OpenBLAS |
| 34 | atom-cedarview server | 0.149 | 2.6 | 22.1 / 22.4 / 18.6 W | x86_64, Intel 6/54/9 Atom S1260 | 2 (4) | 4GB | N=20000, 2 threads, OpenBLAS |
| 35 | raspberry pi-bplus | 0.118 | 0.213 | 1.8 / 1.9 / 1.6 W | ARM, ARMv6 BCM2835 (Model B+) | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 36 | fam14h-bobcat | 0.106 | 2.76 | 26.1 / 27.1 / 14.8 W | x86_64, AMD 20/2/0 Bobcat G-T56N | 2 | 2GB | N=8000, 2 threads, OpenBLAS |
| 37 | raspberry pi compute-module | 0.103 | 0.217 | 2.1 / 2.2 / 1.9 W | ARM, ARMv6 BCM2835 | 1 | 512MB | N=6000, 1 thread, OpenBLAS |
| 38 | atom-eeepc (eeepc 901) | 0.086 | 1.37 | 15.9 / 16.3 / 10.2 W | x86, Intel 6/28/2 Atom N270 | 1 (2) | 2GB | N=12000, 2 threads, OpenBLAS |
| 39 | raspberry pi b (Model B) | 0.073 | 0.213 | 2.9 / 3.0 / 2.7 W | ARM, ARMv6 BCM2835 | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 40 | Pentium D | 0.064 | 10.3 | 160.7 / 180.5 / 77.2 W | x86_64, Intel 15/6/5 Pentium 4/D | 1 (2) | 1GB | N=8000, 2 threads, OpenBLAS |
| 41 | beaglebone-black | 0.026 | 0.068 | 2.6 / 2.8 / 1.9 W | ARM, ARMv7 Cortex A8 (TI AM3) | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 42 | gumstix-overo | 0.015 | 0.041 | 2.7 / 2.8 / 2.0 W | ARM, ARMv7 Cortex A8 (TI OMAP3) | 1 | 256MB | N=4000, 1 thread, ATLAS |
| 43 | beagleboard-xm | 0.014 | 0.054 | 4.0 / 4.3 / 3.2 W | ARM, ARMv7 Cortex A8 (TI DM3730) | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 44 | Pentium II | 0.005 | 0.238 | 48.3 / 48.7 / 31.2 W | x86, Intel 6/5/2 Pentium II | 1 | 256MB | N=3000, 1 thread, OpenBLAS |
| 45 | sparc (Ultra, TI Ultrasparc II) | 0.003 | 0.456 | 140.7 / 146.8 / 136.9 W | SUN SPARC | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 46 | appleII (Apple IIe platinum) | 6.65E-9 | 1.33E-7 | 20.1 / 20.1 / 20.1 W | MOS 65C02 | 1 | 128k | N=10, 1 thread, BASIC |

Entries without complete measurements: ELF Membership Card (RCA 1802, 32kB), sandybridge-ep (85 GFLOPS, Intel 6/45/? snb, 12 (24) cores, 16GB, N=40000, 12 threads, ATLAS), trimslice (ARMv7 Cortex A9, Tegra2, 1GB), octane (SGI MIPS R12k), avr32 (AVR AP7000), gumstix-netstix (ARMv5 Intel PXA255, 64MB), k6-2+ (AMD K6-2+), 486 (Cyrix 486, 20MB), g3-iBook (PPC G3, 640MB), g4-powerBook (PPC G4, 2GB), p4 (Intel Pentium 4, 768MB), core duo (Intel Core Duo, 2GB).

**2. The top 50 fastest computers in the Weaver Research Group**

| # | Name | GFLOPS | Arch / CPU | Cores (threads) | RAM | Linpack settings |
|---|---|---|---|---|---|---|
| 1 | haswell-ep | 436 | x86_64, Intel 6/63/2 hsw e5-2640v3 | 16 (32) | 80GB | N=100000, 16 threads, OpenBLAS |
| 2 | power8 | 195 | ppc64el, IBM power8 8348-21c | 8 (64) | 32GB | N=40000, 8 threads, OpenBLAS |
| 3 | broadwell-ep | 184 | x86_64, Intel 6/79/1 bdw e5-2620v4 | 8 (16) | 32GB | N=50000, 8 threads, OpenBLAS |
| 4 | haswell/quadro (CPU only) | 180 | x86_64, Intel 6/60/3 hsw i7-4790 (Quadro K2200) | 4 (8) | 16GB | N=40000, 4 threads, OpenBLAS |
| ! | haswell/quadro (GPU only) | 38.4 (double) | NVIDIA Quadro K2200 | - | 4GB | N=40000, hpl-cuda |
| 5 | skylake desktop | 161 | x86_64, Intel 6/94/3 skl i7-6700 | 4 (8) | 8GB | N=30000, 4 threads, OpenBLAS |
| 6 | haswell desktop | 145 | x86_64, Intel 6/60/3 hsw i7-4770 | 4 (8) | 4GB | N=20000, 4 threads, OpenBLAS |
| 7 | fam17h-epyc | 131 | x86_64, AMD 23/1/2 EPYC 7251 | 8 (16) | 16GB | N=42000, 8 threads, OpenBLAS |
| 8 | fam15h-piledriver | 117 | x86_64, AMD 21/2/0 Opteron 6376 | 16 (32) | 16GB | N=40000, 16 threads, OpenBLAS |
| 9 | haswell desktop (instrumented) | 115 | x86_64, Intel 6/60/3 hsw i5-4570S | 4 (4) | 4GB | N=20000, 4 threads, OpenBLAS |
| 10 | sandybridge-ep | 85 | x86_64, Intel 6/45/? snb | 12 (24) | 16GB | N=40000, 12 threads, ATLAS |
| 11 | elitebook | 77 | x86_64, AMD 23/24/1 Ryzen 7-3700U (Zen+ laptop) | 4 (8) | 16GB | N=20000, 4 threads, OpenBLAS |
| 12 | broadwell NUC | 66 | x86_64, Intel 6/61/4 Broadwell i7-5557U (Intel NUC) | 2 (4) | 8GB | N=20000, 2 threads, OpenBLAS |
| 13 | fam15h-a10 | 54 | x86_64, AMD 21/19/1 A10-6800B | 4 | 8GB | N=30000, 4 threads, OpenBLAS |
| 14 | broadwell MacBookAir | 51 | x86_64, Intel 6/61/4 Broadwell i5-5250U | 2 (4) | 4GB | N=20000, 2 threads, OpenBLAS |
| 15 | fam10h-phenom | 41 | x86_64, AMD 16/4/3 Phenom II X4 955 | 4 | 2GB | N=15000, 4 threads, OpenBLAS |
| 16 | ivb-mac-mini | 40 | x86_64, Intel 6/58/9 ivb i5-3210M | 2 (4) | 4GB | N=20000, 2 threads, OpenBLAS |
| 17 | ivb-macbook-air | 36 | x86_64, Intel 6/58/9 ivb i5-3427U | 2 (4) | 4GB | N=10000, 2 threads, OpenBLAS |
| 18 | core2 | 18.4 | x86_64, Intel 6/23/10 Core2 P8700 | 2 | 4GB | N=20000, 2 threads, OpenBLAS |
| 19 | jetson-tx1 | 16.0 | ARM, ARMv8 Cortex A57 (NVIDIA) | 4 | 4GB | N=20000, 4 threads, OpenBLAS |
| 20 | pi-cluster | 15.4 | arm, Cortex A7 (Raspberry Pi 2 cluster) | 96 | 24GB | N=48000, OpenBLAS |
| 21 | fam16h-a8-jaguar | 14.0 | x86_64, AMD 22/48/1 A8-6410 | 4 | 4GB | N=10000, 4 threads, OpenBLAS |
| 22 | Raspberry Pi-400 (4GB) | 13.8 | ARM, ARMv8 Cortex A72 | 4 | 4GB | N=12000, 4 threads, OpenBLAS |
| 23 | Raspberry Pi-4B (4GB, 64-bit user/kernel) | 13.5 | aarch64, ARMv8 Cortex A72 | 4 | 4GB | N=20000, 4 threads, OpenBLAS |
| 24 | Pentium D | 11.8 | x86_64, Intel 15/6/5 Pentium 4/D | 1 (2) | 1GB | N=10000, 2 threads, OpenBLAS |
| 25 | Raspberry Pi-4B (1GB) | 10.3 | ARM, ARMv7/v8 Cortex A72 | 4 | 1GB | N=9000, 4 threads, OpenBLAS |
| 26 | Raspberry Pi-4B (4GB) | 9.9 | ARM, ARMv7/v8 Cortex A72 | 4 | 4GB | N=19000, 4 threads, OpenBLAS |
| 27 | odroid-xu | 8.3 | ARM, ARMv7 Cortex A7/A15 (Exynos 5 Octa) | 4 big + 4 little | 2GB | N=12000, 4 threads, OpenBLAS |
| 28 | Raspberry pi-3B+ (3 Model B+) | 5.47 | ARM, ARMv7/8 Cortex A53 | 4 | 1GB | N=6000, 4 threads, OpenBLAS |
| 29 | chromebook | 5.44 | ARM, ARMv7 Cortex A15 (Exynos 5 Dual) | 2 | 2GB | N=10000, 2 threads, OpenBLAS |
| 30 | Raspberry Pi-3A+ | 4.93 | ARM, ARMv7/8 Cortex A53 | 4 | 512MB | N=5000, 4 threads, OpenBLAS |
| 31 | Raspberry pi-2b-v1.2 (2 Model B v1.2) | 4.39 | ARM, ARMv7/8 Cortex A53 | 4 | 1GB | N=8000, 4 threads, OpenBLAS |
| 32 | Raspberry pi-3b (3 Model B) | 3.62 | ARM, ARMv7/8 Cortex A53 | 4 | 1GB | N=8000*, 4 threads, OpenBLAS |
| 33 | atom-cedarview server | 3.35 | x86_64, Intel 6/54/9 Atom S1260 | 2 (4) | 4GB | N=20000, 2 threads, OpenBLAS |
| 34 | atom-cedarview desktop | 2.97 | x86_64, Intel 6/54/1 Atom D2550 | 2 (4) | 4GB | N=20000, 4 threads, OpenBLAS |
| 35 | fam14h-bobcat | 2.76 | x86_64, AMD 20/2/0 Bobcat G-T56N | 2 | 2GB | N=8000, 2 threads, OpenBLAS |
| 36 | dragonboard | 2.22 | ARM, ARMv8 Cortex-A53 (Snapdragon 410c) | 4 | 1GB | N=8000, 4 threads, OpenBLAS |
| 37 | Raspberry pi-2b (Model 2) | 1.47 | ARM, ARMv7 Cortex A7 | 4 | 1GB | N=10000, 4 threads, OpenBLAS |
| 38 | atom-eeepc (eeepc 901) | 1.36 | x86, Intel 6/28/2 Atom N270 | 1 (2) | 2GB | N=10000, 2 threads, OpenBLAS |
| 39 | pandaboard-es | 0.915 | ARM, ARMv7 Cortex A9 (ES, OMAP4) | 2 | 1GB | N=5000*, 2 threads, OpenBLAS |
| 40 | cubieboard2 | 0.859 | ARM, ARMv7 Cortex A7 (Allwinner A20) | 2 | 1GB | N=8000, 2 threads, OpenBLAS |
| 41 | sparc (Ultra, TI Ultrasparc II) | 0.423 | SUN SPARC | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 42 | Raspberry pi-zero (Model Zero) | 0.319 | ARM, ARMv6 BCM2835 | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 43 | Raspberry pi-zero-w (Model Zero W) | 0.247 | ARM, ARMv6 BCM2835 | 1 | 512MB | N=4000, 1 thread, OpenBLAS |
| 44 | Pentium II | 0.241 | x86, Intel 6/5/2 Pentium II | 1 | 256MB | N=4000, 1 thread, OpenBLAS |
| 45 | Raspberry pi-aplus (Model A+) | 0.223 | ARM, ARMv6 BCM2835 | 1 | 256MB | N=4000, 1 thread, OpenBLAS |
| 46 | Raspberry pi compute-module | 0.217 | ARM, ARMv6 BCM2835 | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 47 | Raspberry pi-b (Model B) | 0.213 | ARM, ARMv6 BCM2835 | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 48 | Raspberry pi-bplus (Model B+) | 0.213 | ARM, ARMv6 BCM2835 | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 49 | beaglebone-black | 0.068 | ARM, ARMv7 Cortex A8 (TI AM3) | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 50 | beagleboard-xm | 0.054 | ARM, ARMv7 Cortex A8 (TI DM3730) | 1 | 512MB | N=5000, 1 thread, OpenBLAS |
| 51 | gumstix-overo | 0.041 | ARM, ARMv7 Cortex A8 (TI OMAP3) | 1 | 256MB | N=4000, 1 thread, OpenBLAS |
| 52 | appleII (Apple IIe platinum) | 133 FLOPS | MOS 65C02 | 1 | 128k | N=10, 1 thread, BASIC |

Unranked entries without measurements: ELF Membership Card (RCA 1802, 32kB), trimslice (ARMv7 Cortex A9, Tegra2, 1GB), octane (SGI MIPS R12k), avr32 (AVR AP7000), gumstix-netstix (ARMv5 Intel PXA255, 64MB), k6-2+ (AMD K6-2+), 486 (Cyrix 486, 20MB), g3-iBook (PPC G3, 640MB), g4-powerBook (PPC G4, 2GB), p4 (Intel Pentium 4, 768MB), core duo (Intel Core Duo, 2GB).

References:
- http://web.eece.maine.edu/~vweaver/group/green_machines.html
- http://web.eece.maine.edu/~vweaver/group/machines.html
- https://github.com/deater/performance_results
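The efficiency column in the first table is simply sustained Linpack GFLOPS divided by average power draw; a quick sanity check with two rows from the table (rounding is mine):

```python
def gflops_per_watt(gflops, avg_watts):
    """Efficiency metric from the first table: sustained GFLOPS / average power."""
    return gflops / avg_watts

# Raspberry Pi-4B (4GB, 64-bit): 13.5 GFLOPS at 6.66 W average
pi4 = gflops_per_watt(13.5, 6.66)
# haswell-ep: 428 GFLOPS at 201 W average
hsw = gflops_per_watt(428, 201)
print(round(pi4, 2), round(hsw, 2))  # ~2.03 and ~2.13
```

Note how close the Pi 4B comes to a Haswell-EP server on this metric despite a roughly 30x gap in absolute GFLOPS.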
2,481 reads · 0 comments · 0 likes
2021-10-28
Building and installing ncnn on Linux and the Jetson Nano
**1. Get the ncnn source**

Project: https://github.com/Tencent/ncnn

```bash
git clone https://github.com/Tencent/ncnn.git
cd ncnn
git submodule update --init
```

**2. Install dependencies**

**2.1 Common dependencies**

- git
- g++
- cmake
- protocol buffer (protobuf) header files and protobuf compiler
- glslang
- opencv (for building the examples)

```bash
sudo apt install build-essential git cmake libprotobuf-dev protobuf-compiler libvulkan-dev vulkan-utils libopencv-dev
```

**2.2 Vulkan header files and loader library** (needed for GPU support; skip if you only use the CPU)

**2.2.1 x86**

```bash
# install the Vulkan driver for the GPU
sudo apt install mesa-vulkan-drivers
# install the vulkansdk
wget https://sdk.lunarg.com/sdk/download/1.2.189.0/linux/vulkansdk-linux-x86_64-1.2.189.0.tar.gz?Human=true -O vulkansdk-linux-x86_64-1.2.189.0.tar.gz
tar -xvf vulkansdk-linux-x86_64-1.2.189.0.tar.gz
export VULKAN_SDK=$(pwd)/1.2.189.0/x86_64
```

**2.2.2 Jetson Nano**

First confirm the Vulkan driver works:

```
nvidia@xavier:/$ vulkaninfo
Xlib:  extension "NV-GLX" missing on display "localhost:10.0".
Xlib:  extension "NV-GLX" missing on display "localhost:10.0".
Xlib:  extension "NV-GLX" missing on display "localhost:10.0".
/build/vulkan-tools-WR7ZBj/vulkan-tools-1.1.126.0+dfsg1/vulkaninfo/vulkaninfo.h:399: failed with ERROR_INITIALIZATION_FAILED
```

To track down the failure, run vulkaninfo from the graphical session (e.g. connected over VNC), where it succeeds:

```
nano@nano:~$ vulkaninfo
===========
VULKAN INFO
===========
Vulkan Instance Version: 1.2.70

Instance Extensions:
====================
Instance Extensions count = 16
VK_KHR_device_group_creation : extension revision 1
······
minImageCount = 2
maxImageCount = 8
currentExtent:  width = 256, height = 256
minImageExtent: width = 256, height = 256
maxImageExtent: width = 256, height = 256
maxImageArrayLayers = 1
······
```

Then install the vulkansdk:

```bash
# build and install the vulkansdk
sudo apt-get update && sudo apt-get install git build-essential libx11-xcb-dev libxkbcommon-dev libwayland-dev libxrandr-dev cmake
git clone https://github.com/KhronosGroup/Vulkan-Loader.git
cd Vulkan-Loader && mkdir build && cd build
../scripts/update_deps.py
cmake -DCMAKE_BUILD_TYPE=Release -DVULKAN_HEADERS_INSTALL_DIR=$(pwd)/Vulkan-Headers/build/install ..
make -j$(nproc)
export LD_LIBRARY_PATH=$(pwd)/loader
cd Vulkan-Headers
ln -s ../loader lib
export VULKAN_SDK=$(pwd)
```

**3. Build**

CPU-only:

```bash
# run this if you did not install Vulkan
cd ncnn
mkdir -p build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=OFF -DNCNN_SYSTEM_GLSLANG=ON -DNCNN_BUILD_EXAMPLES=ON ..
make -j$(nproc)
```

GPU (x86):

```bash
# run this if you have a GPU and installed Vulkan
cd ncnn
mkdir -p build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=ON -DNCNN_SYSTEM_GLSLANG=ON -DNCNN_BUILD_EXAMPLES=ON ..
make -j$(nproc)
```

GPU (Jetson Nano):

```bash
# use this on the Jetson Nano
cd ncnn
mkdir -p build
cd build
cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/jetson.toolchain.cmake -DNCNN_VULKAN=ON -DCMAKE_BUILD_TYPE=Release -DNCNN_BUILD_EXAMPLES=ON ..
make -j$(nproc)
```

**4. Verify the build**

**4.1 squeezenet**

```bash
cd ../examples
../build/examples/squeezenet ../images/256-ncnn.png
```

```
nano@nano:/software/ncnn/examples$ ../build/examples/squeezenet ../images/256-ncnn.png
[0 NVIDIA Tegra X1 (nvgpu)]  queueC=0[16]  queueG=0[16]  queueT=0[16]
[0 NVIDIA Tegra X1 (nvgpu)]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
[0 NVIDIA Tegra X1 (nvgpu)]  fp16-p/s/a=1/1/1  int8-p/s/a=1/1/1
[0 NVIDIA Tegra X1 (nvgpu)]  subgroup=32  basic=1  vote=1  ballot=1  shuffle=1
532 = 0.168945
920 = 0.093323
716 = 0.063110
nvdc: start nvdcEventThread
nvdc: exit nvdcEventThread
```

**4.2 benchncnn**

```bash
cd ../benchmark
../build/benchmark/benchncnn 10 $(nproc) 0 0
```

```
nano@nano:/software/ncnn/benchmark$ ../build/benchmark/benchncnn 10 $(nproc) 0 0
[0 NVIDIA Tegra X1 (nvgpu)]  queueC=0[16]  queueG=0[16]  queueT=0[16]
[0 NVIDIA Tegra X1 (nvgpu)]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
[0 NVIDIA Tegra X1 (nvgpu)]  fp16-p/s/a=1/1/1  int8-p/s/a=1/1/1
[0 NVIDIA Tegra X1 (nvgpu)]  subgroup=32  basic=1  vote=1  ballot=1  shuffle=1
loop_count = 10
num_threads = 4
powersave = 0
gpu_device = 0
cooling_down = 1
          squeezenet  min =   19.90  max =   22.82  avg =   20.82
     squeezenet_int8  min =   36.58  max =  236.35  avg =   66.89
           mobilenet  min =   24.75  max =   41.05  avg =   28.83
      mobilenet_int8  min =   42.95  max =   70.39  avg =   52.08
        mobilenet_v2  min =   31.84  max =   38.09  avg =   35.59
        mobilenet_v3  min =   29.77  max =   38.48  avg =   33.56
          shufflenet  min =   25.98  max =   36.90  avg =   30.86
       shufflenet_v2  min =   18.46  max =   27.65  avg =   20.49
             mnasnet  min =   22.63  max =   35.37  avg =   24.88
     proxylessnasnet  min =   27.85  max =   33.44  avg =   30.52
     efficientnet_b0  min =   34.85  max =   48.31  avg =   38.46
   efficientnetv2_b0  min =   56.62  max =   76.70  avg =   61.99
        regnety_400m  min =   28.31  max =   35.59  avg =   31.92
           blazeface  min =   14.40  max =   34.70  avg =   23.63
           googlenet  min =   55.01  max =   75.36  avg =   60.89
      googlenet_int8  min =  111.53  max =  315.94  avg =  167.58
            resnet18  min =   51.45  max =   77.21  avg =   59.26
       resnet18_int8  min =   81.99  max =  207.09  avg =  117.43
             alexnet  min =   69.98  max =  102.26  avg =   83.27
               vgg16  min =  302.14  max =  337.56  avg =  320.55
          vgg16_int8  min =  464.06  max =  601.92  avg =  540.28
            resnet50  min =  140.36  max =  176.66  avg =  159.53
       resnet50_int8  min =  299.16  max =  554.05  avg =  453.26
      squeezenet_ssd  min =   53.43  max =   78.75  avg =   63.67
 squeezenet_ssd_int8  min =   91.45  max =  215.14  avg =  123.13
       mobilenet_ssd  min =   66.30  max =   90.77  avg =   76.86
  mobilenet_ssd_int8  min =   89.05  max =  261.33  avg =  119.18
      mobilenet_yolo  min =  142.24  max =  182.72  avg =  154.48
  mobilenetv2_yolov3  min =   81.96  max =  107.17  avg =   91.93
         yolov4-tiny  min =  103.76  max =  138.15  avg =  115.43
           nanodet_m  min =   27.15  max =   36.88  avg =   32.00
    yolo-fastest-1.1  min =   33.21  max =   40.95  avg =   35.84
      yolo-fastestv2  min =   17.51  max =   29.54  avg =   21.32
  vision_transformer  min = 4981.82  max = 5576.98  avg = 5198.79
nvdc: start nvdcEventThread
nvdc: exit nvdcEventThread
```

References: how to build · Vulkan Support on L4T · Installing the NVIDIA vulkan driver and building the vulkan sdk on the Jetson platform · vulkaninfo failed with VK_ERROR_INITIALIZATION_FAILED
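The benchncnn output is easy to post-process, e.g. to compare runs. A small sketch that parses the `min/max/avg` lines into a dict (the helper and regex are mine, matched to the output format shown above):

```python
import re

# Matches lines like "squeezenet  min =   19.90  max =   22.82  avg =   20.82"
LINE = re.compile(
    r"^\s*(\S+)\s+min\s*=\s*([\d.]+)\s+max\s*=\s*([\d.]+)\s+avg\s*=\s*([\d.]+)"
)

def parse_bench(text):
    """Return {model: (min_ms, max_ms, avg_ms)} from benchncnn stdout."""
    results = {}
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            results[m.group(1)] = tuple(float(x) for x in m.group(2, 3, 4))
    return results

sample = "          squeezenet  min =   19.90  max =   22.82  avg =   20.82"
print(parse_bench(sample))  # {'squeezenet': (19.9, 22.82, 20.82)}
```

Feeding it the full benchmark dump above gives one entry per model, which makes it straightforward to diff CPU vs GPU runs.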
892 reads · 0 comments · 0 likes
2021-06-29
Installing TensorFlow with GPU support on the Jetson Nano
**1. Prerequisites and dependencies**

Before you install TensorFlow for Jetson, ensure you:

Install JetPack on your Jetson device.

Install the system packages required by TensorFlow:

```bash
sudo apt-get update
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
```

Install and upgrade pip3:

```bash
sudo apt-get install python3-pip
sudo pip3 install -U pip testresources setuptools==49.6.0
```

Install the Python package dependencies:

```bash
sudo pip3 install -U numpy==1.19.4 future==0.18.2 mock==3.0.5 h5py==2.10.0 keras_preprocessing==1.1.1 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11
```

**2. Installing TensorFlow**

Note: as of the 20.02 TensorFlow release, the package name has changed from tensorflow-gpu to tensorflow. See the section on upgrading TensorFlow for more information.

Install TensorFlow using pip3. This command installs the latest version of TensorFlow compatible with JetPack 4.5:

```bash
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v45 tensorflow
```

Note: TensorFlow version 2 was recently released and is not fully backward compatible with TensorFlow 1.x. If you would prefer a TensorFlow 1.x package, install it by constraining the version to less than 2:

```bash
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v45 'tensorflow<2'
```

To install the latest TensorFlow supported by a particular JetPack version:

```bash
sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v$JP_VERSION tensorflow
```

where JP_VERSION is the major and minor version of JetPack you are using, such as 42 for JetPack 4.2.2 or 33 for JetPack 3.3.1.

To install a specific TensorFlow version:

```bash
sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v$JP_VERSION tensorflow==$TF_VERSION+nv$NV_VERSION
```

where:
- JP_VERSION: the major and minor JetPack version, as above.
- TF_VERSION: the released TensorFlow version, for example 1.13.1.
- NV_VERSION: the monthly NVIDIA container version of TensorFlow, for example 19.01.

Note: the TensorFlow version you install must be supported by your JetPack version, and the package name may differ for older releases. See the TensorFlow For Jetson Platform Release Notes for recent releases with their package names and NVIDIA container / JetPack compatibility.

For example, to install TensorFlow 1.13.1 as of the 19.03 release:

```bash
sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3
```

**3. Check whether the GPU is usable**

For Tensorflow-gpu 1.x.x (e.g. 1.2.0):

```python
import tensorflow as tf
tf.test.is_gpu_available()
```

For Tensorflow-gpu 2.x.x (e.g. 2.2.0):

```python
import tensorflow as tf
tf.config.list_physical_devices('GPU')
```

References: Installing TensorFlow For Jetson Platform: https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html · Checking whether Tensorflow-GPU is usable: https://www.jianshu.com/p/8eb7e03a9163
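The JP_VERSION / TF_VERSION / NV_VERSION substitution above is plain string assembly; a small sketch that builds the extra-index URL and the version pin (helper names are mine, values from the examples above):

```python
def jetson_tf_index_url(jp_version: str) -> str:
    """NVIDIA per-JetPack pip index, e.g. '45' for JetPack 4.5 (see above)."""
    return ("https://developer.download.nvidia.com/compute/redist/jp/v"
            + jp_version)

def pin_spec(tf_version: str, nv_version: str, pkg: str = "tensorflow") -> str:
    """Version pin of the form pkg==TF+nvNV used by the NVIDIA wheels;
    pass pkg='tensorflow-gpu' for releases older than 20.02."""
    return f"{pkg}=={tf_version}+nv{nv_version}"

# Reproduce the 1.13.1 / 19.03-release example from the text
print(jetson_tf_index_url("42"))
print(pin_spec("1.13.1", "19.3", pkg="tensorflow-gpu"))
```

The two strings concatenate into exactly the final `pip3 install` example shown above.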
679 reads · 0 comments · 0 likes
2020-12-19
Installing the latest Node.js on a Raspberry Pi
The stock Raspberry Pi image usually ships without Nodejs, so when you need it you have to install it yourself. This article shows how to do that quickly.

**1. Check whether Nodejs is installed**

`node -v` and `npm -v` quickly show whether Nodejs is present:

```
pi@raspberrypi:~ $ node -v
-bash: node: command not found
pi@raspberrypi:~ $ npm -v
-bash: npm: command not found
```

The "node: command not found" response tells us Nodejs is not installed.

**2. Installation**

1. Check the system info:

```
pi@raspberrypi:~ $ file /bin/ls
/bin/ls: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=67a394390830ea3ab4e83b5811c66fea9784ee69, stripped
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.19.75-v7+ #1270 SMP Tue Sep 24 18:45:11 BST 2019 armv7l GNU/Linux
```

The system is 32-bit on ARMv7. Knowing this, we download the matching package from the Nodejs site.

2. Download the prebuilt Node.js package. Pick the Linux Binaries (ARM) ARMv7 build and download it to the Pi:

```bash
pi@raspberrypi:~ $ wget https://nodejs.org/dist/v12.14.1/node-v12.14
```

Update (Tue 17 Mar 21:02:31 CST 2020): node.js has been updated; the latest download command is:

```bash
pi@raspberrypi:~ $ wget https://nodejs.org/dist/v12.16.1/node-v12.16.1-linux-armv7l.tar.xz
```

3. Install Nodejs. Once the archive has downloaded, unpack it:

```bash
pi@raspberrypi:~ $ tar -xvf node-v12.16.1-linux-armv7l.tar.xz
```

This creates a node-v12.16.1-linux-armv7l folder in the current directory; enter it to test:

```bash
cd node-v12.16.1-linux-armv7l/bin  # enter the node folder
./node -v                          # check the node version
```

If the output looks right, the last step is to set up links so node can be used without entering its folder.

4. Link node and npm:

```bash
sudo ln /home/pi/node-v12.16.1-linux-armv7l/bin/node /usr/local/bin/node  # link node
sudo ln -s /home/pi/node-v12.16.1-linux-armv7l/bin/npm /usr/local/bin/npm  # symlink npm
```

After that, `node -v` and `npm -v` confirm that Nodejs installed successfully:

```
pi@raspberrypi:~ $ node -v
v12.14.1
pi@raspberrypi:~ $ npm -v
6.13.4
```
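The `uname -a` check above drives which tarball to fetch; a small sketch of that mapping from machine string to the Node.js download file name (the mapping reflects nodejs.org dist naming as I understand it; treat it as an assumption and verify against the download page):

```python
# Map `uname -m` machine strings to Node.js download architecture names
NODE_ARCH = {
    "armv6l": "armv6l",   # Pi Zero / 1
    "armv7l": "armv7l",   # 32-bit Pi 2/3/4 OS, as in this article
    "aarch64": "arm64",   # 64-bit Pi OS
    "x86_64": "x64",
}

def node_tarball(version: str, machine: str) -> str:
    """File name of the prebuilt Node.js archive for a given machine."""
    return f"node-v{version}-linux-{NODE_ARCH[machine]}.tar.xz"

print(node_tarball("12.16.1", "armv7l"))  # node-v12.16.1-linux-armv7l.tar.xz
```

For the armv7l system in this article, this reproduces exactly the archive name used in the wget command.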
817 reads · 0 comments · 0 likes
2020-12-19
Tunneling to a Raspberry Pi behind NAT with Ngrok
**1. Register at Sunny-Ngrok and open a tunnel**

Register an account at Sunny-Ngrok, then open a tunnel from the dashboard.

**2. Download the client and start the tunnel**

1. Download the Ngrok client on the Pi.

Download: http://hls.ctopus.com/sunny/linux_arm.zip?v=2

After downloading, move the executable to /usr/local/bin and make it executable:

```bash
sudo mv sunny /usr/local/bin/sunny
sudo chmod +x /usr/local/bin/sunny
```

2. Write a startup script:

```bash
sudo nano /etc/init.d/sunny
```

The /etc/init.d/sunny startup script:

```sh
#!/bin/sh -e
### BEGIN INIT INFO
# Provides:          ngrok.cc
# Required-Start:    $network $remote_fs $local_fs
# Required-Stop:     $network $remote_fs $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: autostartup of ngrok for Linux
### END INIT INFO

NAME=sunny
DAEMON=/usr/local/bin/$NAME
PIDFILE=/var/run/$NAME.pid

[ -x "$DAEMON" ] || exit 0

case "$1" in
start)
    if [ -f $PIDFILE ]; then
        echo "$NAME already running..."
        echo -e "\033[1;35mStart Fail\033[0m"
    else
        echo "Starting $NAME..."
        start-stop-daemon -S -p $PIDFILE -m -b -o -q -x $DAEMON -- clientid <tunnel-id> || return 2
        echo -e "\033[1;32mStart Success\033[0m"
    fi
    ;;
stop)
    echo "Stoping $NAME..."
    start-stop-daemon -K -p $PIDFILE -s TERM -o -q || return 2
    rm -rf $PIDFILE
    echo -e "\033[1;32mStop Success\033[0m"
    ;;
restart)
    $0 stop && sleep 2 && $0 start
    ;;
*)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
exit 0
```

⚠️ Note: replace `<tunnel-id>` in the script with your own tunnel id.

3. Test the script:

```bash
sudo chmod 755 /etc/init.d/sunny
sudo /etc/init.d/sunny start    # start
sudo /etc/init.d/sunny stop     # stop
sudo /etc/init.d/sunny restart  # restart
```

4. Enable start on boot:

```bash
cd /etc/init.d
sudo update-rc.d sunny defaults 90   # enable at boot
sudo update-rc.d -f sunny remove     # disable at boot
```

**3. Done**

Start the Ngrok tunnel and the dashboard shows the server online.
855 reads · 0 comments · 0 likes
2020-12-18
Building a remote BT download box on the Raspberry Pi with Aria2
树莓派使用Aria2搭建BT远程下载机

一、安装Aria2

sudo apt-get update
sudo apt-get install aria2

二、Aria2配置

2.1 创建配置文件

mkdir -p ~/.config/aria2/
touch ~/.config/aria2/aria2.session
nano ~/.config/aria2/aria2.config

2.2 添加如下配置信息

# set your own path
dir=[yourpath]
disk-cache=32M
file-allocation=trunc
continue=true
max-concurrent-downloads=10
max-connection-per-server=16
min-split-size=10M
split=5
max-overall-download-limit=0
#max-download-limit=0
#max-overall-upload-limit=0
#max-upload-limit=0
disable-ipv6=false
save-session=~/.config/aria2/aria2.session
input-file=~/.config/aria2/aria2.session
save-session-interval=60
enable-rpc=true
rpc-allow-origin-all=true
rpc-listen-all=true
rpc-secret=secret
#event-poll=select
rpc-listen-port=6800
# for PT user please set to false
enable-dht=true
enable-dht6=true
enable-peer-exchange=true
# for increasing BT speed
listen-port=51413
#follow-torrent=true
#bt-max-peers=55
#dht-listen-port=6881-6999
#bt-enable-lpd=false
#bt-request-peer-speed-limit=50K
peer-id-prefix=-TR2770-
user-agent=Transmission/2.77
seed-ratio=0
#force-save=false
#bt-hash-check-seed=true
bt-seed-unverified=true
bt-save-metadata=true
bt-tracker=http://93.158.213.92:1337/announce,udp://151.80.120.114:2710/announce,udp://62.210.97.59:1337/announce,udp://188.241.58.209:6969/announce,udp://80.209.252.132:1337/announce,udp://208.83.20.20:6969/announce,udp://185.181.60.67:80/announce,udp://194.182.165.153:6969/announce,udp://37.235.174.46:2710/announce,udp://5.206.3.65:6969/announce,udp://89.234.156.205:451/announce,udp://92.223.105.178:6969/announce,udp://51.15.40.114:80/announce,udp://207.241.226.111:6969/announce,udp://176.113.71.60:6961/announce,udp://207.241.231.226:6969/announce

然后启动aria2:

$ sudo aria2c --conf-path=/home/pi/.config/aria2/aria2.config
Exception caught
Exception: [download_helper.cc:563] errorCode=1 Failed to open the file ~/.config/aria2/aria2.session, cause: File not found or it is a directory

结果出现错误,这是因为找不到 aria2.session 文件导致的,应该是无法识别“~”目录造成的。解决办法也很简单:将配置文件中的“~”修改为“/home/pi”即可。修改后再次启动aria2:

$ sudo aria2c --conf-path=/home/pi/.config/aria2/aria2.config
03/19 13:35:47 [NOTICE] IPv4 RPC: listening on TCP port 6800
03/19 13:35:47 [NOTICE] IPv6 RPC: listening on TCP port 6800

可以看到aria2已经成功启动了!

三、配置aria2开机启动

创建systemd service文件:

sudo nano /lib/systemd/system/aria2.service

把 User 和 conf-path 换成自己的用户名:

[Unit]
Description=Aria2 Service
After=network.target

[Service]
User=pi
ExecStart=/usr/bin/aria2c --conf-path=/home/pi/.config/aria2/aria2.config

[Install]
WantedBy=default.target

重载服务并设置开机启动:

sudo systemctl daemon-reload
sudo systemctl enable aria2
sudo systemctl start aria2
sudo systemctl status aria2

看到如下输出证明启动成功(记住 TCP port,AriaNg 配置以及公网端口映射需要):

$ sudo systemctl status aria2
● aria2.service - Aria2 Service
   Loaded: loaded (/lib/systemd/system/aria2.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-03-19 13:44:39 CST; 5s ago
 Main PID: 6798 (aria2c)
    Tasks: 1 (limit: 2200)
   Memory: 3.4M
   CGroup: /system.slice/aria2.service
           └─6798 /usr/bin/aria2c --conf-path=/home/pi/.config/aria2/aria2.config

Mar 19 13:44:39 raspberrypi systemd[1]: Started Aria2 Service.
Mar 19 13:44:39 raspberrypi aria2c[6798]: 03/19 13:44:39 [NOTICE] IPv4 RPC: listening on TCP port 6800
Mar 19 13:44:39 raspberrypi aria2c[6798]: 03/19 13:44:39 [NOTICE] IPv6 RPC: listening on TCP port 6800

四、安装AriaNg以在网页上进行下载管理

AriaNg 是一个让 aria2 更容易使用的现代 Web 前端。AriaNg 使用纯 html & javascript 开发,所以其不需要任何编译器或运行环境,只要将 AriaNg 放在 Web 服务器里并在浏览器中打开即可使用。AriaNg 使用响应式布局,支持各种计算机或移动设备。

安装AriaNg的前提是树莓派上已经配置好了web环境,如果没有,按照树莓派安装 lnmp 套件搭建个人博客网站服务器的教程,在树莓派上安装nginx软件(⚠️注意:只需要安装nginx即可)。

安装AriaNg,在这里选择最新版本的AriaNg:

cd /var/www/html
wget https://github.com/mayswind/AriaNg/releases/download/1.0.0/AriaNg-1.0.0.zip
unzip AriaNg-1.0.0.zip -d aira

在浏览器中访问 http://your-ip/aira 即可打开AriaNg了。这时AriaNg显示未连接,在“系统设置 - RPC(192.168.0.108) - Aria2 RPC 密钥”中,输入“secret”即可连接!之后,就可以愉快地用树莓派下载电影或者文件了~
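除了用 AriaNg 网页端,也可以直接调用 aria2 的 JSON-RPC 接口添加下载任务。下面的示意脚本拼出 aria2.addUri 方法的请求报文并打印出来,token 后面跟的就是配置文件里 rpc-secret 的值(这里按上文配置为 secret;下载地址 http://example.com/file.zip 仅为占位示例):

```shell
#!/bin/sh
# 仅为示意:构造 aria2.addUri 的 JSON-RPC 请求报文
# "token:secret" 对应配置文件里的 rpc-secret=secret
# URL 为占位示例,实际使用时替换为真实下载地址
RPC_SECRET=secret
URL="http://example.com/file.zip"
PAYLOAD='{"jsonrpc":"2.0","id":"1","method":"aria2.addUri","params":["token:'"$RPC_SECRET"'",["'"$URL"'"]]}'
echo "$PAYLOAD"
# aria2 运行时,把报文 POST 到 RPC 端口即可真正提交任务:
# curl -s http://127.0.0.1:6800/jsonrpc -d "$PAYLOAD"
```

这对写脚本做定时、批量下载很方便,AriaNg 本质上也是通过同一个 RPC 接口与 aria2 通信的。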
2020年12月18日
658 阅读
0 评论
0 点赞
2020-12-18
树莓派创建AP变身无线路由器
树莓派创建AP变身无线路由器

树莓派从3代开始,就带有无线Wi-Fi模块。除了连接无线Wi-Fi上网外,树莓派还可以开启AP模式,使得树莓派变为无线路由器,这样就可以通过树莓派共享的网络上网。PS:树莓派的无线信号超级差,仅限实验或者应急,当然轻度使用也是冒得问题。以下是在树莓派上开启AP的方法步骤。

一、使用的设备

树莓派 3 一个,Raspbian系统已经安装好。

二、安装AP软件

1.安装依赖包:

$ sudo apt-get install util-linux procps hostapd iproute2 iw haveged dnsmasq

2.安装软件:

git clone https://github.com/oblique/create_ap
cd create_ap
sudo make install

提示如下是正常的……没细看,差点以为出错了:

$ sudo make install
install -Dm755 create_ap /usr/bin/create_ap
install -Dm644 create_ap.conf /etc/create_ap.conf
[ ! -d /lib/systemd/system ] || install -Dm644 create_ap.service /usr/lib/systemd/system/create_ap.service
[ ! -e /sbin/openrc-run ] || install -Dm755 create_ap.openrc /etc/init.d/create_ap
install -Dm644 bash_completion /usr/share/bash-completion/completions/create_ap
install -Dm644 README.md /usr/share/doc/create_ap/README.md

三、创建AP

创建一个WPA + WPA2密码的Wi-Fi网络:

create_ap wlan0 eth0 pi 12345678

该命令在wlan0接口上创建一个名为pi的无线网络,密码为12345678。这样无线网络创建完成,打开手机即可连接。实测距离树莓派5米远,中间没有阻挡,手机连接树莓派无线网络,信号只有1格,但是连接上之后刷网页、看视频都冒得问题。

四、拓展

在github项目上,给出了使用例子:

无密码(开放网络):
create_ap wlan0 eth0 MyAccessPoint

WPA + WPA2密码:
create_ap wlan0 eth0 MyAccessPoint MyPassPhrase

没有Internet共享的AP:
create_ap -n wlan0 MyAccessPoint MyPassPhrase

桥接互联网共享:
create_ap -m bridge wlan0 eth0 MyAccessPoint MyPassPhrase

桥接Internet共享(预配置的桥接接口):
create_ap -m bridge wlan0 br0 MyAccessPoint MyPassPhrase

通过相同的WiFi接口进行Internet共享:
create_ap wlan0 wlan0 MyAccessPoint MyPassPhrase

选择其他WiFi适配器驱动程序:
create_ap --driver rtl871xdrv wlan0 eth0 MyAccessPoint MyPassPhrase

使用管道的无密码(开放网络):
echo -e "MyAccessPoint" | create_ap wlan0 eth0

使用管道的WPA + WPA2密码:
echo -e "MyAccessPoint\nMyPassPhrase" | create_ap wlan0 eth0

启用IEEE 802.11n:
create_ap --ieee80211n --ht_capab '[HT40+]' wlan0 eth0 MyAccessPoint MyPassPhrase

客户端隔离:
create_ap --isolate-clients wlan0 eth0 MyAccessPoint MyPassPhrase

系统服务:使用持久化的systemd服务。

立即启动服务:
systemctl start create_ap

开机启动:
systemctl enable create_ap
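上面 make install 的输出里可以看到安装了 /etc/create_ap.conf,systemd 服务 create_ap 启动时读取的就是这份配置。下面的示意脚本生成一份与“create_ap wlan0 eth0 pi 12345678”等价的配置(字段名 WIFI_IFACE、INTERNET_IFACE、SSID、PASSPHRASE 参考 create_ap 项目的 README,属假设示例;这里演示写入 /tmp,实际应写入 /etc/create_ap.conf):

```shell
#!/bin/sh
# 仅为示意:生成 create_ap systemd 服务读取的配置
# 字段名为假设示例,以 create_ap 项目 README 为准
# 演示写入 /tmp,实际应写入 /etc/create_ap.conf
CONF=/tmp/create_ap.conf
cat > "$CONF" <<'EOF'
WIFI_IFACE=wlan0
INTERNET_IFACE=eth0
SSID=pi
PASSPHRASE=12345678
EOF
cat "$CONF"
```

配置好后再执行 systemctl enable create_ap,树莓派开机就会自动以这组参数建立热点,不必每次手敲命令行。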
2020年12月18日
759 阅读
0 评论
0 点赞
2020-12-18
树莓派通过命令行设置静态IP
树莓派通过命令行设置静态IP

一、确认Wi-Fi连接接口名称

ifconfig

二、配置

sudo nano /etc/dhcpcd.conf

在此文件中,您需要在文件末尾添加以下几行:

interface wlan0
static ip_address=192.168.1.115/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1

解释:

interface wlan0 – 此行定义了我们要修改其配置的接口。如果您的无线连接未在wlan0上运行,请确保在此处更改接口名称。

static ip_address=192.168.1.115/24 – 这是您希望在该网络使用的静态IP地址和子网前缀长度(/24)。确保这是一个未使用的地址,否则会出现冲突问题。

static routers=192.168.1.1 – 此行定义路由器(或网关)的IP地址。确保此地址与路由器的IP地址匹配,以便DHCPCD知道连接位置。

static domain_name_servers=192.168.1.1 – 此行定义DHCP守护程序将用于此接口的DNS服务器地址。通常,可以将其设置为路由器的IP地址。

三、重启验证

sudo reboot
hostname -I
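上面那几行也可以用 heredoc 整块追加,避免逐行手敲出错。下面的示意脚本把配置追加到 /tmp 下的演示文件里(实际操作时目标应为 /etc/dhcpcd.conf,且需要 sudo;IP 地址按自己的网段修改):

```shell
#!/bin/sh
# 仅为示意:把静态 IP 配置整块追加到 dhcpcd 配置文件末尾
# 这里写入 /tmp 的演示文件,实际应追加到 /etc/dhcpcd.conf
CONF=/tmp/dhcpcd.conf.demo
cat >> "$CONF" <<'EOF'
interface wlan0
static ip_address=192.168.1.115/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
EOF
cat "$CONF"
```

追加完成后重启,再用 hostname -I 确认拿到的就是设定的静态地址。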
2020年12月18日
705 阅读
0 评论
0 点赞
2020-12-18
树莓派上安装Samba实现文件共享
树莓派上安装Samba实现文件共享

树莓派上使用samba服务可方便快捷实现文件共享。SMB(Server Message Block)通信协议是微软和英特尔在1987年制定的协议,主要是作为Microsoft网络的通讯协议,其不仅提供目录和打印机共享,还支持认证、权限设置。本文是树莓派上安装samba的教程,使得树莓派在Windows和Mac等系统间的文件共享变得方便。同时,由于现在的电视基本上都支持Samba协议,所以树莓派安装Samba后,就可以在电视上直接看存储在树莓派上的电影啦。

一、安装Samba

使用如下命令更新源,并且安装Samba软件:

sudo apt-get update && sudo apt-get install -y samba

等待一会,Samba软件就会安装成功。可以查看安装的版本:

$ samba --version
Version 4.9.5-Debian

二、配置Samba

samba的配置文件是/etc/samba/smb.conf,要配置Samba,就需要修改这个配置文件。我们以共享树莓派上的/samba文件夹为例,在配置文件中,找到[printers]这一行,在这一行前面添加以下几行代码:

[public]
comment = public path
path = /samba
guest ok = yes
browseable = yes
writeable = yes
create mask = 0777
directory mask = 0777

保存配置文件,然后重启samba服务使配置生效:

sudo systemctl restart smbd
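改完 smb.conf 后,建议先用 samba 自带的 testparm 工具检查语法再重启服务。下面的示意脚本把共享段写到 /tmp 的演示文件并做简单自检(共享路径按自己实际目录修改;实际检查时是对 /etc/samba/smb.conf 运行 testparm):

```shell
#!/bin/sh
# 仅为示意:生成共享段到演示文件并做简单自检
# 共享路径按自己实际目录修改
SHARE=/tmp/smb-share.conf.demo
cat > "$SHARE" <<'EOF'
[public]
comment = public path
path = /samba
guest ok = yes
browseable = yes
writeable = yes
create mask = 0777
directory mask = 0777
EOF
grep -q '^guest ok = yes' "$SHARE" && echo "share config ok"
# 安装好 samba 后,可用下面命令检查正式配置文件的语法:
# testparm -s /etc/samba/smb.conf
```

testparm 会解析配置并报告语法错误,比重启服务后再排查要省事得多。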
2020年12月18日
745 阅读
0 评论
0 点赞
2020-12-18
用树莓派来挖矿(莱特币LTC)
用树莓派来挖矿(莱特币LTC)

零、注册账号

首先注册莱特币LTC的钱包,也可以直接在蚂蚁矿池里以子账号的方式挖,这里我选择的是第二种方法。注册蚂蚁矿池后,添加一个子账号,币种选择莱特币。

一、安装树莓派挖矿软件

在树莓派上使用 Pooler/cpuminer 挖矿程序,首先更新树莓派,安装工具包。更新之前记得修改树莓派的源,不然速度很慢:树莓派系统常用中文镜像源。

sudo apt-get update
sudo apt-get install gcc g++ libstdc++-8-dev libpcre3-dev libcurl3-dev make

下载cpuminer挖矿软件,并且解压缩:

wget https://github.com/pooler/cpuminer/releases/download/v2.5.0/pooler-cpuminer-2.5.0.tar.gz
tar zxvf pooler-cpuminer-2.5.0.tar.gz

接着编译挖矿软件:

cd cpuminer-2.5.0/
./configure
make
sudo make install

编译成功后可以检查是否安装成功,使用“minerd --v”命令:

$ minerd --v
cpuminer 2.5.0
built on Feb 15 2020
features: ARM ARMv5E
libcurl/7.64.0 OpenSSL/1.1.1c zlib/1.2.11 libidn2/2.0.5 libpsl/0.20.2 (+libidn2/2.0.5) libssh2/1.8.0 nghttp2/1.36.0 librtmp/2.3

二、树莓派开始挖矿

输入以下命令开始挖矿:

minerd -o stratum+tcp://stratum-ltc.antpool.com:8888 -O jupiterLTC.1:密码

注意:其中-o参数后面是数字货币的矿池地址,-O参数后是矿机名:密码,矿机名的命名规则是“用户名.任意数字”,密码可以为空。这样树莓派就开始挖矿了!
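“用户名.任意数字”这个矿机名拼法可以用下面的小脚本示意,把矿池地址、子账号、矿机编号和密码拆成变量再拼出完整的 minerd 命令(ANTPOOL_USER、WORKER_ID、PASSWORD 均为假设占位,按自己的蚂蚁矿池子账号填写):

```shell
#!/bin/sh
# 仅为示意:按“用户名.任意数字”规则拼装 minerd 启动命令
# ANTPOOL_USER / WORKER_ID / PASSWORD 均为占位,按自己账号填写
ANTPOOL_USER=jupiterLTC
WORKER_ID=1
PASSWORD=123
POOL=stratum+tcp://stratum-ltc.antpool.com:8888
# -o 指定矿池地址,-O 指定“矿机名:密码”
MINE_CMD="minerd -o $POOL -O $ANTPOOL_USER.$WORKER_ID:$PASSWORD"
echo "$MINE_CMD"
```

多台树莓派一起挖时,只要给每台换一个 WORKER_ID,就能在矿池后台分开看到每台机器的算力。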
2020年12月18日
1,031 阅读
0 评论
0 点赞