TensorFlow and Keras Model Deployment and Client Access Tutorial

This section walks through the complete workflow of taking a deep learning model built with TensorFlow and Keras from development and training, through freezing, to deployment and, finally, client access. Along the way it covers model saving, model conversion, and client interaction.

First, the archive "save_model_test.tar.gz" mentioned in the title contains the key code and files for each stage of model development and deployment. Its structure and contents are the starting point for understanding the overall project workflow.

### Knowledge Point 1: TensorFlow Framework Basics

TensorFlow is an open-source machine learning library developed by Google and is widely used in deep learning and machine learning projects. It supports a broad range of devices and platforms, from servers to embedded devices, and provides powerful APIs for building, training, and deploying models. Its core concepts include the tensor, the computation graph, and the session.

### Knowledge Point 2: Introduction to Keras

Keras is a high-level neural network API that runs on top of TensorFlow and is designed to be concise, fast, and modular. It lets users easily design and experiment with different neural network architectures, which greatly simplifies model development and iteration; its friendly interface makes building models and preprocessing data straightforward.

### Knowledge Point 3: Model Freezing

Once a deep learning model has been trained, it usually needs to be deployed to a production environment where clients can call it. At that point the model is "frozen": the trainable variables in the model are converted into constants, which reduces the model file size and improves inference efficiency. In TensorFlow, freezing a model typically means exporting a Protocol Buffers (.pb) file that contains both the model's structure and its weights.

### Knowledge Point 4: Saving and Loading Models

Saving and loading models is an essential step in machine learning development. In TensorFlow, a Saver object manages saving and restoring: the model's structure, weights, and training configuration can all be persisted, so the model can be loaded at any time for inference or further training. Saved models usually take the form of checkpoint files, or are converted into a .pb model file.

### Knowledge Point 5: Deploying the Model

Deployment means converting the trained model into a format clients can consume and running it on a server, so that clients can send requests and receive the model's predictions. In TensorFlow, deployment usually involves converting the Keras model to the SavedModel format or directly to TensorFlow's static-graph format; the converted model can then be deployed to a server or cloud platform and communicate with clients through an API.

### Knowledge Point 6: Client Access

Client access means a client application interacts with the deployed model service over a communication protocol such as HTTP or gRPC, sending data and receiving prediction results. In this example, the client code should contain the logic for issuing HTTP requests, handling the returned predictions, and displaying them in the user interface.

### Knowledge Point 7: Archive File Structure

The archive "save_model_test.tar.gz" likely contains the following files:

- Model code: defines the model architecture, presumably written with the Keras API.
- Training code: a Python script containing the model training logic.
- Freezing code: a script that converts and freezes the model into the .pb format.
- Frozen model: the model file already converted to .pb format.
- Client code: the logic for the client to interact with the server.

Understanding the knowledge points above gives a complete picture of the project workflow; the illustrative sketches that follow walk through the main stages in code. In practice, the theory has to be applied concretely, by writing and running code to save, convert, deploy, and access the model, which requires not only a solid grasp of TensorFlow and Keras but also good programming and problem-solving skills.
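To make the tensor / graph / session trio from Knowledge Point 1 concrete, here is a minimal sketch using TensorFlow's v1 compatibility API (an assumption; the archive may target TF 1.x directly):

```python
# Tiny TF1-style sketch of tensor, graph, and session (compat API assumed).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant(2.0)        # tensors are nodes in the computation graph
b = tf.constant(3.0)
c = a * b                   # this only builds the graph; nothing runs yet

with tf.Session() as sess:  # a session executes the graph
    print(sess.run(c))      # -> 6.0
```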
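For Knowledge Point 2, a minimal sketch of defining and compiling a small Keras model; the layer sizes, input shape, and training data are illustrative assumptions, not taken from the archive:

```python
# Minimal Keras model sketch -- architecture and input shape are assumptions.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(784,)),  # hidden layer
    keras.layers.Dense(10, activation="softmax"),                   # class scores
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)  # x_train / y_train assumed to exist
```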
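Knowledge Point 3 in code: a hedged sketch of the classic TF1-style freezing flow, where variables are folded into constants and the result is written out as a single .pb file. The checkpoint name "model.ckpt" and the output node name "output/Softmax" are placeholders that must match your own graph:

```python
# TF1-style freezing sketch: fold variables into constants, write a .pb file.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.Session() as sess:
    saver = tf.train.import_meta_graph("model.ckpt.meta")  # rebuild the graph
    saver.restore(sess, "model.ckpt")                      # load trained weights
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output/Softmax"])          # variables -> constants
    with tf.gfile.GFile("frozen_model.pb", "wb") as f:
        f.write(frozen.SerializeToString())                # one self-contained file
```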
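For Knowledge Point 4, a sketch of Saver-based checkpointing with a toy variable (the variable and file names are illustrative):

```python
# TF1-style checkpoint sketch: Saver writes and restores checkpoint files.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w = tf.get_variable("w", shape=[2, 2])           # a toy variable to persist
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "model.ckpt")               # -> .index / .data / .meta files

with tf.Session() as sess:
    saver.restore(sess, "model.ckpt")            # graph already defined above
    print(sess.run(w))                           # restored values
```

At the Keras level, the equivalent is a single call pair: `model.save("my_model.h5")` and `keras.models.load_model("my_model.h5")`, which persist architecture, weights, and optimizer state together.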
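Knowledge Point 5 as a sketch: exporting a Keras model to SavedModel format and hosting it with TensorFlow Serving. The model name, export path, and port are assumptions for illustration:

```python
# Export sketch: write the model in SavedModel format, versioned for serving.
import tensorflow as tf

# A tiny stand-in model; in practice this is your trained Keras model.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])

# The version subdirectory ("1") is what TensorFlow Serving expects
# under the model base path.
tf.saved_model.save(model, "export/my_model/1")

# Serving it with TensorFlow Serving (shell; paths and port are assumptions):
#   tensorflow_model_server --rest_api_port=8501 \
#       --model_name=my_model \
#       --model_base_path=/absolute/path/to/export/my_model
```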
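Finally, Knowledge Point 6 as a sketch: a minimal REST client posting to a TensorFlow Serving predict endpoint. The URL, model name, and input shape are assumptions tied to the export sketch above:

```python
# Minimal REST client sketch against TensorFlow Serving's predict endpoint.
import json
import requests

payload = json.dumps({"instances": [[0.0] * 784]})   # one dummy 784-dim input
resp = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    data=payload,
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
print(resp.json()["predictions"])                    # model outputs per instance
```

A real client would wrap this in error handling and feed the predictions into its user interface, as described above.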
