# A Detailed Guide to Installing Ubuntu in a Virtual Machine


"This document is a hands-on guide to installing Ubuntu in a virtual machine, covering pre-installation preparation, the basic installation process, configuring the language environment, and installing common software."

Installing Ubuntu in a virtual machine is a common and practical approach, especially for learning or testing new software and configurations, because nothing you do inside the VM affects the host system. The detailed steps:

1. **Preparation and basic installation**
   - Download the Ubuntu 16.04 ISO image from the official Ubuntu site (http://cn.ubuntu.com/download).
   - Install virtualization software such as VMware. A virtual machine lets you run Ubuntu on top of your existing operating system without extra hardware, and you can experiment safely because any change made inside the VM never touches the host.
   - Before starting the Ubuntu installation, enable the "access personal folders in the virtual machine" option in the VM settings so that files can be shared between the VM and the host.

2. **Configuring the language environment**
   - After installation, log in and open "System Settings" → "Language Support", then install "Simplified Chinese".
   - Move "汉语" (Chinese) to the top of the language list so that the system uses Chinese by default.
   - Restart the system when done. If prompted during the language switch, do not rename the standard folders; keeping the original names keeps paths stable.

3. **Installing common software**
   - Software on Ubuntu is normally installed with the `apt` package manager, which downloads packages from Ubuntu's servers. Since the official servers are overseas, downloads can be slow; switching to a mirror close to your location speeds up installation and updates.
   - For convenience, pin "Terminal" to the launcher so you can open a terminal quickly.
   - Use `sudo apt update` to refresh the package index and `sudo apt install <package>` to install a given package.
   - Common applications such as the Google Chrome browser and Sogou Pinyin are typically installed from vendor-provided packages or found through Ubuntu Software, rather than from the default `apt` repositories.

This guide covers the basic workflow of setting up Ubuntu in a virtual environment and is a good starting point for beginners.
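As a sketch of the mirror switch described above (the TUNA mirror and the `cn.archive.ubuntu.com` host name below are illustrative; the host in your `sources.list` may differ), the substitution can be tried on a sample line before touching the real file:

```shell
# Try the mirror substitution on a sample sources.list line first;
# mirrors.tuna.tsinghua.edu.cn is just one common Chinese mirror.
echo "deb http://cn.archive.ubuntu.com/ubuntu/ xenial main restricted" \
  | sed 's|cn.archive.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|'

# Once the output looks right, apply it for real and install software:
#   sudo sed -i 's|cn.archive.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
#   sudo apt update
#   sudo apt install <package>
```

Using `|` as the `sed` delimiter avoids having to escape the slashes inside the URL.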

## Related resources

```
pi@rk3318-box:/root$ ~/klipper/scripts/install-octopi-dependencies.sh
bash: /home/pi/klipper/scripts/install-octopi-dependencies.sh: No such file or directory
pi@rk3318-box:/root$ cd /klipper/scripts/
bash: cd: /klipper/scripts/: No such file or directory
pi@rk3318-box:/root$ cd ~
pi@rk3318-box:~$ ls
crowsnest      kiauh-backups          klippy-env           OctoPrint
fluidd         klipper                moonraker            printer_data
fluidd-config  KlipperScreen          moonraker-env        timelapse
kiauh          klipper_tmc_autotune   moonraker-timelapse
pi@rk3318-box:~$ cd /klipper/scripts/
bash: cd: /klipper/scripts/: No such file or directory
pi@rk3318-box:~$ cd /klipper/scripts
bash: cd: /klipper/scripts: No such file or directory
pi@rk3318-box:~$ cd /klipper
bash: cd: /klipper: No such file or directory
pi@rk3318-box:~$ ls
crowsnest      kiauh-backups          klippy-env           OctoPrint
fluidd         klipper                moonraker            printer_data
fluidd-config  KlipperScreen          moonraker-env        timelapse
kiauh          klipper_tmc_autotune   moonraker-timelapse
pi@rk3318-box:~$ cd klipper
pi@rk3318-box:~/klipper$ ls
config  COPYING  docs  klippy  lib  Makefile  README.md  scripts  src  test
pi@rk3318-box:~/klipper$ cd scripts/
pi@rk3318-box:~/klipper/scripts$ ls
avrsim.py            flash-sdcard.sh          klipper-mcu.service
buildcommands.py     flash_usb.py             klipper-pru-start.sh
calibrate_shaper.py  graph_accelerometer.py   klipper-start.sh
canbus_query.py      graph_extruder.py        klipper-uninstall.sh
check-gcc.sh         graph_motion.py          klippy-requirements.txt
checkstack.py        graph_shaper.py          logextract.py
check_whitespace.py  graphstats.py            make_version.py
check_whitespace.sh  graph_temp_sensor.py     motan
ci-build.sh          install-arch.sh          parsecandump.py
ci-install.sh        install-beaglebone.sh    spi_flash
Dockerfile           install-centos.sh        stepstats.py
dump_mcu.py          install-debian.sh        test_klippy.py
flash-ar100.py       install-octopi.sh        update_chitu.py
flash-linux.sh       install-ubuntu-18.04.sh  update_mks_robin.py
flash-pru.sh         install-ubuntu-22.04.sh  whconsole.py
pi@rk3318-box:~/klipper/scripts$
```
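The failed `cd` attempts in the transcript above come down to `/klipper` (looked up at the filesystem root) versus `~/klipper` (inside the home directory). A minimal illustration, using a throwaway `demo_klipper` directory rather than a real Klipper checkout:

```shell
# ~ expands to $HOME, so 'cd ~/demo_klipper/scripts' and
# 'cd "$HOME/demo_klipper/scripts"' name the same directory;
# a bare /demo_klipper would be looked up at the root instead.
mkdir -p ~/demo_klipper/scripts
cd ~/demo_klipper/scripts
pwd   # prints $HOME/demo_klipper/scripts, not /demo_klipper/scripts
```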

```
root@JFPC1:~/Workspace/protobuf-21.12# make -j$(nproc)
make all-recursive
make[1]: Entering directory '/root/Workspace/protobuf-21.12'
Making all in .
make[2]: Entering directory '/root/Workspace/protobuf-21.12'
make[2]: Leaving directory '/root/Workspace/protobuf-21.12'
Making all in src
make[2]: Entering directory '/root/Workspace/protobuf-21.12/src'
  CXX      google/protobuf/wire_format.lo
  CXX      google/protobuf/wrappers.pb.lo
  CXX      google/protobuf/compiler/command_line_interface.lo
  CXX      google/protobuf/compiler/code_generator.lo
  CXX      google/protobuf/compiler/plugin.lo
  CXX      google/protobuf/compiler/plugin.pb.lo
  CXX      google/protobuf/compiler/subprocess.lo
  CXX      google/protobuf/compiler/zip_writer.lo
  CXX      google/protobuf/any_lite.lo
  CXX      google/protobuf/compiler/main.o
  CXX      google/protobuf/arenastring.lo
  CXX      google/protobuf/arena.lo
  CXX      google/protobuf/arenaz_sampler.lo
  CXX      google/protobuf/extension_set.lo
  CXX      google/protobuf/generated_enum_util.lo
  CXX      google/protobuf/generated_message_tctable_lite.lo
  CXX      google/protobuf/generated_message_util.lo
  CXX      google/protobuf/implicit_weak_message.lo
  CXX      google/protobuf/inlined_string_field.lo
  CXX      google/protobuf/io/coded_stream.lo
  CXX      google/protobuf/io/io_win32.lo
  CXX      google/protobuf/io/strtod.lo
  CXX      google/protobuf/io/zero_copy_stream.lo
  CXX      google/protobuf/io/zero_copy_stream_impl.lo
  CXX      google/protobuf/io/zero_copy_stream_impl_lite.lo
  CXX      google/protobuf/map.lo
  CXX      google/protobuf/message_lite.lo
  CXX      google/protobuf/parse_context.lo
  CXX      google/protobuf/repeated_field.lo
  CXX      google/protobuf/repeated_ptr_field.lo
  CXX      google/protobuf/stubs/bytestream.lo
  CXX      google/protobuf/stubs/common.lo
  CXX      google/protobuf/stubs/int128.lo
  CXX      google/protobuf/stubs/status.lo
  CXX      google/protobuf/stubs/statusor.lo
```

An error is reported at this point.


# CN-RMA: Combined Network with Ray Marching Aggregation for 3D Indoor Object Detection from Multi-view Images

This repository is an official implementation of [CN-RMA](https://arxiv.org/abs/2403.04198).

## Results

| dataset | mAP@0.25 | mAP@0.5 | config |
| :-----: | :------: | :-----: | :----: |
| ScanNet | 58.6 | 36.8 | [config](./projects/configs/mvsdetection/ray_marching_scannet.py) |
| ARKit | 67.6 | 56.5 | [config](./projects/configs/mvsdetection/ray_marching_arkit.py) |

Configuring, preparing the data, and running the entire project is complicated, so we provide all detection results, visualization results and checkpoints for the validation sets of both datasets at [Tsinghua Cloud](https://cloud.tsinghua.edu.cn/d/90bd36fe6f024ad58497/). You can directly download our results for [ScanNet](https://cloud.tsinghua.edu.cn/f/c4cb78b7d935467c8855/?dl=1) and [ARKitScenes](https://cloud.tsinghua.edu.cn/f/4c77c67123ab46b58605/?dl=1), or use our pre-trained weights for [ScanNet](https://cloud.tsinghua.edu.cn/f/b518872d3f11483aa121/?dl=1) and [ARKitScenes](https://cloud.tsinghua.edu.cn/f/17df2aa67e50407bb555/?dl=1) for validation.

## Prepare

* **Environments**

  Linux, Python == 3.8, CUDA == 11.3, pytorch == 1.10.0, mmdet3d == 0.15.0, MinkowskiEngine == 0.5.4

  This implementation is built on the [mmdetection3d](https://github.com/open-mmlab/mmdetection3d) framework and can be set up following [install.md](./doc/install.md).

* **Data**

  Follow mmdet3d to process the ScanNet and ARKitScenes datasets. You can process the datasets following [scannet.md](./doc/scannet.md) and [arkit.md](./doc/arkit.md).

* **Pretrained weights**

  The required pretrained weights are provided [here](https://cloud.tsinghua.edu.cn/d/0b3af9884b7841ae8398/).

* After preparation, you will see the following directory structure:

  ```
  CN-RMA
  ├── mmdetection3d
  ├── projects
  │   ├── configs
  │   ├── mvsdetection
  ├── tools
  ├── data
  │   ├── scannet
  │   ├── arkit
  ├── doc
  │   ├── install.md
  │   ├── arkit.md
  │   ├── scannet.md
  │   ├── train_val.md
  ├── README.md
  ├── data_prepare
  ├── post_process
  ├── dist_test.sh
  ├── dist_train.sh
  ├── test.py
  ├── train.py
  ```

## How to Run

To evaluate our method on ScanNet, download the [final checkpoint](https://cloud.tsinghua.edu.cn/f/b518872d3f11483aa121/?dl=1), set the `work_dir` of `projects/configs/mvsdetection/ray_marching_scannet.py` to your desired path, and run:

```shell
bash dist_test.sh projects/configs/mvsdetection/ray_marching_scannet.py {scannet_best.pth} 4
```

Similarly, to evaluate on ARKitScenes, download the [final checkpoint](https://cloud.tsinghua.edu.cn/f/17df2aa67e50407bb555/?dl=1), set the `work_dir` of `projects/configs/mvsdetection/ray_marching_arkit.py` to your desired path, and run:

```shell
bash dist_test.sh projects/configs/mvsdetection/ray_marching_arkit.py {arkit_best.pth} 4
```

After this, apply NMS post-processing to the results:

```shell
python ./post_process/nms_bbox.py --result_path {your_work_dir}/results
```

The pc_det_nms step does not always succeed; if it fails, simply run it again.

You can then evaluate the results:

```shell
./post_process/evaluate_bbox.py --dataset {arkit/scannet} --data_path {your_arkit_or_scannet_source_path} --result_path {your_work_dir}/results
```

And visualize them:

```shell
./post_process/visualize_results.py --dataset {arkit/scannet} --data_path {your_arkit_or_scannet_source_path} --save_path {your_work_dir}/results
```

If the NMS failed, the visualized results will show many bounding boxes lying very close to each other; in that case, run the NMS step again.

Training the network from scratch is involved; if you want to do so, please follow [train_val.md](./doc/train_val.md).

## Citation

If you find this project useful for your research, please consider citing:

```bibtex
@InProceedings{Shen_2024_CVPR,
    author    = {Shen, Guanlin and Huang, Jingwei and Hu, Zhihua and Wang, Bin},
    title     = {CN-RMA: Combined Network with Ray Marching Aggregation for 3D Indoor Object Detection from Multi-view Images},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {21326-21335}
}
```

## Contact

If you have any questions, feel free to open an issue or contact us at [email protected].
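Since the NMS step occasionally needs to be rerun, a small retry wrapper (a generic shell sketch, not part of this repository) can automate the loop; the `nms_bbox.py` invocation in the usage comment is the command you would substitute in:

```shell
# Retry a command up to N times until it succeeds (generic helper).
retry() {
  local n=$1; shift
  local i=1
  until "$@"; do
    if [ "$i" -ge "$n" ]; then
      echo "failed after $n attempts" >&2
      return 1
    fi
    i=$((i + 1))
    echo "retrying ($i/$n)..." >&2
  done
}
# Usage (substitute your own work_dir):
#   retry 5 python ./post_process/nms_bbox.py --result_path {your_work_dir}/results
```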
