Building a 4-DOF Robotic Arm with MATLAB and Arduino (2)

电子森林 2020-03-20 00:00

A week ago I reposted a highly technical article from Circuit Cellar:


Building a 4-DOF Robotic Arm with MATLAB and Arduino (1)


Today I'm continuing with Part 2. It is likewise in English and is reposted here for readers studying the technology. The original comes from the Circuit Cellar website; you can click "阅读原文" ("Read the original") at the lower left to read it there directly.


The Before and After Math

In Part 1, Raul discussed the general hardware and software aspects of his 4-DOF robotic arm project. In Part 2, he covers the topics related to the mathematical foundations of robotics. He then wraps up by talking about testing and debugging the robotic arm, and shares his thoughts regarding future improvements.

In Part 1 of this article (December 2019, Circuit Cellar 353), I presented the general hardware and software aspects of this project, including the electronics, the arm structure, the code for the embedded controller and relevant parts of the MATLAB code. I also talked about the PID controller implemented for joint 1 of the robotic arm, the serial command interface for receiving joint angle data in the embedded controller from the companion computer to control the robotic arm’s pose, how the MATLAB GUI interfaces with the application code, how to serially interface the companion computer with the embedded controller and how to plot the robotic arm’s pose in the MATLAB GUI.

In this second (and last) part, I’ll be discussing topics more related to the mathematical foundations of robotics, such as: configuration and task spaces, robot pose representation in three dimensions, homogeneous transformations, the Denavit-Hartenberg convention and forward and inverse kinematics. I will end by talking a bit about testing and debugging the robotic arm—and I’ll share my ideas about improvements.

CONFIGURATION AND TASK SPACES
The joint configuration of a robotic arm is the set of all joint angles that unequivocally describes its pose. It can be represented as a vector "q" containing all joint angles, as shown in Equation (1).

q = [θ1, θ2, θ3, θ4]^T    (1)

θ1: joint 1 angle
θ2: joint 2 angle
θ3: joint 3 angle
θ4: joint 4 angle

The configuration space of a robotic arm is the space of all possible joint configurations "q" that it can have. Its dimension is equal to the number of joints or degrees of freedom (in our case, four). The configuration space is constrained to a subset of all possible combinations of angles, because joints generally have physical limits due to the arm structure design and motor rotation limits. Based on those limits, our robotic arm has the following constrained logical joint ranges:

θ1: [-90,+90] (joint 1)
θ2: [-25,+155] (joint 2)
θ3: [-135,+45] (joint 3)
θ4: [-90,+90] (joint 4)
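These constrained ranges amount to a membership test on the configuration space. A minimal Python sketch (the article's actual implementation is in MATLAB; the names here are mine):

```python
# Logical joint ranges from the article, in degrees: (lower, upper) per joint.
JOINT_LIMITS = [(-90, 90), (-25, 155), (-135, 45), (-90, 90)]

def in_configuration_space(q):
    """True if q = (theta1, theta2, theta3, theta4) lies within the
    constrained logical range of every joint."""
    return all(lo <= theta <= hi for theta, (lo, hi) in zip(q, JOINT_LIMITS))
```

A configuration such as (0, 90, -90, -90) passes, while (0, 160, 0, 0) fails because joint 2 exceeds +155 degrees.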

For joints 2, 3 and 4, I'm using 180-degree servo motors, so in principle these joints are physically constrained to a range between 0 and 180 degrees. For joint 1, however, I'm using a regular DC motor that can rotate through a greater range than the servo motors. I'm constraining it to 180 degrees as well, to make all ranges uniform while still obtaining a reasonable work space for the robotic arm.

Now let's explain how I defined those logical joint ranges. In the case of joint 2, for example, I took the 180 degrees of the servo motor's physical range and defined a logical range of [-25, +155]. I chose that range after visually verifying that it covers the motion I wanted for this joint, to obtain a reasonable work space. Then I did the same with the rest of the joints. Whenever the physical and logical ranges don't coincide, there must be a conversion mapping between them. Figure 1 shows that this is the case for joints 2, 3 and 4.

FIGURE 1
Joint angle conversion mappings


ANGLE CONVERSIONS
When the embedded controller receives angles for these joints from the companion computer, it must convert the angles from their logical ranges (which are used in the companion computer for all operations, including kinematics calculations) to their corresponding physical ranges, before applying them to the motors. For instance, when joint 2 is at +155 degrees, that means its servo motor must be at 0 degrees. When it’s at -25 degrees, the servo motor must be physically at 180 degrees, and so on for all angles in between.
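The conversion itself is just a linear range mapping. A hedged Python sketch (the firmware uses Arduino's map() and the companion code is MATLAB; function names here are mine, and this version uses floating point rather than Arduino's integer math):

```python
def map_range(value, from_lo, from_hi, to_lo, to_hi):
    """Linear range conversion, the floating-point analogue of Arduino's map()."""
    return (value - from_lo) * (to_hi - to_lo) / (from_hi - from_lo) + to_lo

def joint2_logical_to_physical(theta):
    """Joint 2: logical range [-25, +155] onto the reversed servo range [180, 0]."""
    return map_range(theta, -25.0, 155.0, 180.0, 0.0)
```

At the extremes, a logical +155 degrees lands on 0 physical degrees and -25 on 180, matching the mapping described above; zero logical degrees corresponds to 155 servo degrees.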

Aside from the fact that some joints must have specific range conversions, you can see in Figure 2 that although servo motors 2 and 3 are mounted in the same orientation, their physical ranges are reversed with respect to each other ([180, 0] vs. [0, 180]) in the conversion mapping. This is because when joint 2 rotates positively around its "z" axis (more on coordinate frames later), its servo motor goes toward zero physical degrees. But with joint 3 the opposite is true. You should check the orientation of each servo motor according to your arm's particular design, which, in turn, will define the orientation of their physical ranges in the conversion mapping (shown in Figure 1).

FIGURE 2
Robotic arm servo motor orientation

The Arduino "map" C language function allows an easy conversion of a value between two different ranges. For example, the code line ang_conv = map(servo2_target_angle, SERVO2_INF_LIMIT, SERVO2_SUP_LIMIT, 180, 0) performs the conversion for joint 2 shown in Figure 1. If, for any reason, your particular conversion mappings are different from the ones outlined in Figure 1, you should change all relevant "map" code lines in the embedded controller's code to suit yours.

Figure 3 shows the pose of the robotic arm with all joints at zero logical degrees. This is also important to consider when mounting the motors in the mechanical structure. That’s because by following the Denavit-Hartenberg convention (which I’ll be discussing in more detail later) the zero-degree line of each motor must coincide with the “x” axis of its assigned coordinate frame. For example, applying the conversion mapping, zero logical degrees for joint 2 corresponds with 155 degrees of its servo motor. To mount joint 2 properly, we must put its servo motor at 155 degrees first, and then mount its link horizontally (Figure 3).

FIGURE 3
Robotic arm coordinate frame assignments

Finally, let's quickly define "Task Space." The task space of a robotic arm is defined as the set of all possible end effector poses. Its dimension is equal to the number of parameters used to describe the pose. In the case of our robotic arm, for simplicity I'm using a position in three dimensions (x, y, z) without specifying orientation angle(s), so the task space dimension is three. It would also be possible to describe the pose with a set of four parameters, adding a pitch orientation angle (x, y, z, θp), for a task space dimension of four. In general, more than four parameters are possible as well.

In a robotic arm, a coordinate frame is generally assigned to the end of each link, beginning with the base or "body" of the arm and ending with the tip of the end effector (Figure 3). Every subsequent coordinate frame can be described with respect to the previous one by a "transformation." A transformation is a combination of a translational and a rotational component. Figure 4 describes a transformation from frame {1} to frame {2} in two and three dimensions. Figure 4a describes the transformation in two dimensions ("x" and "y"). In this figure, "n" is the distance of frame {2} from frame {1}, and "γ" is the rotation of frame {2} with respect to frame {1} around the "z" axis (not shown in Figure 4a, because it's the axis perpendicular to the plane, coming out of the page). In three dimensions (Figure 4b), the transformation has a translational component "n" in three dimensions ("x", "y" and "z"), and there are also two additional rotation angles: "α" around the "x" axis and "β" around the "y" axis.

FIGURE 4
Transformations. a) In two dimensions; b) in three dimensions

DENAVIT-HARTENBERG
The Denavit-Hartenberg convention I'm about to explain in this section can be challenging for newcomers. If that's true in your case, don't be discouraged. It can take some time to absorb and analyze it thoroughly before the concept eventually clicks. I suggest checking some of the extra material available on the Internet if you want to deepen your understanding of this topic. It's practically impossible to explain it in reasonable detail in a short article like this, but I'll do my best.

There’s more than one way to attach frame references to links in a robotic arm. The so-called “Denavit-Hartenberg convention” defines a particular way to do it, and is a popular approach with robotic arms. In this convention, the basic idea is that frame references are attached to joints located at the link ends, such that the rotation component of the transformation must be defined around the “z” axis and is associated with the joint itself. On the other hand, the translation component of the transformation is defined along the “x” axis and is associated with the link length. Figure 3 shows the frame assignment to each joint, including the tip of the end effector in our robotic arm by following this convention.

More concretely, however, in the Denavit-Hartenberg convention, a total of four parameters are used to describe the transformation between two consecutive frames: two rotational parameters (“θ” and “α”) and two linear parameters (“r” and “d”). Figure 5 shows the Denavit-Hartenberg parameters for our robotic arm, along with a succinct description of their meaning. I’ll discuss this in greater depth next.

FIGURE 5
Denavit-Hartenberg parameters for the robotic arm

For a typical robotic arm with rotational joints like ours, the “θ” angles will be the joint rotation angles around their corresponding “z” axes (Figure 6); “r” will be the link lengths along the “x” axes for joints that rotate in a vertical plane (joints 2, 3 and 4 in our robotic arm), and for joint 1, which rotates in the horizontal plane, “r” is zero (there’s no link length in frame {1}’s “x” axis). The “α” angle is the rotation angle of a previous coordinate frame around the “x” axis of the next frame to align the former with the latter.

FIGURE 6
Denavit-Hartenberg parameters in relation to the arm structure

For our robotic arm, the transformation from frame {1} to frame {2} will have an “α“ of 90 degrees, because that’s the angle frame {1} must be rotated around the “x” axis of frame {2}, to align frame {1} with frame {2}. In other words, 90 degrees is the angle we must rotate frame {1} to align their “z” axis with the “z” axis of frame {2}. For the transformation between frame {2} and frame {3}, “α“ is zero, because both frames are already aligned (that is, their “z” axes are in the same direction), and the same applies to the next two transformations. All rotations in the convention follow the “right hand rule.”

The “d” parameter indicates the displacement of two consecutive coordinate frames along the “z” axis of the first of the two. The transformation from frame {1} to frame {2} has a “d” parameter equal to the distance between the first and second joints (that is, from the base of the robotic arm to joint 2), because that’s the distance between the two, along the “z” axis direction of frame {1}. However, the displacement of frame {3} from frame {2}, along the “z” axis of frame {2} is zero, and it is the same for subsequent transformations.

Generally, for our type of robotic arm, “α”, “r” and “d” will be constants, but you should change at least “r2”, “r3”, “r4” and “d1” in the MATLAB code to reflect the link lengths of your particular robotic arm structure. The Robotic_Arm_GUI_OpeningFcn() function located in the file “Robotic_Arm_GUI.m” contains all Denavit-Hartenberg constants.

A homogeneous transformation matrix makes it possible to describe mathematically the transformations between coordinate frames. Equation (2) shows how a Denavit-Hartenberg homogeneous transformation matrix is defined by using the four aforementioned parameters (“θ”, “α“, “r” and “d”). In the case of revolute joints, “θ” (the joint angle) is variable and “α“, “r” and “d” are generally constants. Because we have five coordinate frames attached to our robotic arm, we can build four transformation matrices that describe the pose of each subsequent coordinate frame with respect to the previous one, with each one built by using their specific parameters. For instance, to obtain the first homogeneous transformation matrix (transformation from frame {1} to frame {2}), we must use the parameters from row 1 in Figure 5, row 2 for the transformation from frame {2} to frame {3}, and so on.


          | cosθ   -sinθ·cosα    sinθ·sinα   r·cosθ |
n-1Tn =   | sinθ    cosθ·cosα   -cosθ·sinα   r·sinθ |    (2)
          | 0       sinα          cosα        d     |
          | 0       0             0           1     |

n-1Tn : homogeneous transformation from frame {n-1} to frame {n}
R : rotation component (the upper-left 3×3 block)
T : translation component (the rightmost 3×1 column)
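Equation (2) can be written down directly in code. A small Python sketch (this mirrors the standard Denavit-Hartenberg matrix rather than the author's MATLAB code verbatim; angle arguments are in degrees):

```python
from math import cos, sin, radians

def dh_matrix(theta_deg, alpha_deg, r, d):
    """Denavit-Hartenberg homogeneous transformation between two consecutive
    frames, built from the four parameters of Equation (2)."""
    th, al = radians(theta_deg), radians(alpha_deg)
    return [
        [cos(th), -sin(th) * cos(al),  sin(th) * sin(al), r * cos(th)],
        [sin(th),  cos(th) * cos(al), -cos(th) * sin(al), r * sin(th)],
        [0.0,      sin(al),            cos(al),           d],
        [0.0,      0.0,                0.0,               1.0],
    ]
```

With all four parameters at zero the result is the identity matrix (no rotation, no translation); with α = 90 and d = 10 you get the kind of frame {1} to frame {2} transformation described above for joint 1.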

FORWARD KINEMATICS
Forward kinematics is the task of computing the position of the end effector from a given joint configuration—that is, a set of joint angles. Equation (3) gives us the general forward kinematics formula. To do so, we have to compute the homogeneous transformation matrix that describes the pose of the end effector (frame {5}) relative to the base of the robotic arm (frame {1}) by multiplying all four transformation matrices defined for consecutive coordinate frames, as described by Equation (4) and illustrated in Figure 7.
X = f(q)    (3)

1T5 = 1T2 · 2T3 · 3T4 · 4T5    (4)

where X = (x, y, z) is the end effector position and q the joint configuration.


FIGURE 7
Robotic arm forward kinematics

The resulting transformation matrix from frame {1} to frame {5} gives us the position of the tip of the end effector with respect to the body frame, which will generally be the "world" frame, that is, the coordinate frame in which all required poses for the robotic arm will likely be given (for example, the coordinates of an object the robotic arm must pick up). So, to calculate the forward kinematics given a new set of joint angles, first we calculate the individual transformation matrices using the new "θ" joint angles, then we multiply them in the given sequence to obtain the new end effector position.

In the “Run_Plot_Fwd_Kinematics.m” file, the code line fwd_kin_result = Calc_Fwd_Kinematics(dh_params) calls the function that will compute the forward kinematics. This function (located in the file “Calc_Fwd_Kinematics.m”) will compute the four new transformation matrices and then multiply them to obtain the new end effector’s position.
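To make the chain multiplication concrete, here is a hedged, self-contained Python sketch (not the author's MATLAB code). The DH rows follow the Figure 5 description for this type of arm (joint 1 with α = 90°, r = 0, d = d1; joints 2 to 4 with α = 0 and link lengths r2, r3, r4), but the numeric link lengths are made-up placeholders, not the author's measurements:

```python
from math import cos, sin, radians

D1, R2, R3, R4 = 10.0, 10.0, 10.0, 5.0  # illustrative link lengths (assumed)

def dh_matrix(theta_deg, alpha_deg, r, d):
    # Denavit-Hartenberg homogeneous transformation (Equation (2)).
    th, al = radians(theta_deg), radians(alpha_deg)
    return [[cos(th), -sin(th)*cos(al),  sin(th)*sin(al), r*cos(th)],
            [sin(th),  cos(th)*cos(al), -cos(th)*sin(al), r*sin(th)],
            [0.0,      sin(al),          cos(al),         d],
            [0.0,      0.0,              0.0,             1.0]]

def matmul4(a, b):
    # Plain 4x4 matrix product.
    return [[sum(a[i][k]*b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(q):
    """End-effector (x, y, z) for joint angles q = (th1, th2, th3, th4) in
    degrees, by chaining the four transformations as in Equation (4)."""
    t = dh_matrix(q[0], 90.0, 0.0, D1)
    for theta, r in ((q[1], R2), (q[2], R3), (q[3], R4)):
        t = matmul4(t, dh_matrix(theta, 0.0, r, 0.0))
    return (t[0][3], t[1][3], t[2][3])
```

With all joints at zero logical degrees the arm lies stretched along "x", so the tip sits at (R2 + R3 + R4, 0, D1); rotating joint 1 by 90 degrees sweeps that same point onto the "y" axis.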

If we calculate the partial derivatives of Equation (3) with respect to “θ”, we obtain Equation (5), which is called the Jacobian.
J(q) = ∂f(q)/∂q    (5)
The Jacobian expresses the rate of change of the linear components in "x", "y" and "z" of the moving tip of the end effector with respect to the change in joint angles. So the Jacobian basically tells us what tiny displacement of the arm's tip to expect in "x", "y" and "z" for tiny increments or decrements of the joint angles. This is a particularly useful mathematical intuition that will help us understand the inverse kinematics method I'll explain next.
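That rate-of-change intuition can be checked numerically: nudge each joint angle slightly and watch how the tip moves. A self-contained Python sketch, with central finite differences standing in for the analytic derivatives of Equation (5) (the DH rows follow the Figure 5 description and the link lengths are illustrative placeholders, not the author's measurements):

```python
from math import cos, sin, radians

D1, R2, R3, R4 = 10.0, 10.0, 10.0, 5.0  # illustrative link lengths (assumed)

def dh_matrix(theta_deg, alpha_deg, r, d):
    th, al = radians(theta_deg), radians(alpha_deg)
    return [[cos(th), -sin(th)*cos(al),  sin(th)*sin(al), r*cos(th)],
            [sin(th),  cos(th)*cos(al), -cos(th)*sin(al), r*sin(th)],
            [0.0,      sin(al),          cos(al),         d],
            [0.0,      0.0,              0.0,             1.0]]

def matmul4(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(q):
    t = dh_matrix(q[0], 90.0, 0.0, D1)
    for theta, r in ((q[1], R2), (q[2], R3), (q[3], R4)):
        t = matmul4(t, dh_matrix(theta, 0.0, r, 0.0))
    return [t[0][3], t[1][3], t[2][3]]

def numeric_jacobian(q, eps=1e-4):
    """3x4 Jacobian of tip position (x, y, z) with respect to the four
    joint angles (in degrees), by central finite differences."""
    J = [[0.0]*4 for _ in range(3)]
    for j in range(4):
        qp, qm = list(q), list(q)
        qp[j] += eps
        qm[j] -= eps
        xp, xm = forward_kinematics(qp), forward_kinematics(qm)
        for i in range(3):
            J[i][j] = (xp[i] - xm[i]) / (2.0 * eps)
    return J
```

At the all-zeros pose, for instance, nudging joint 1 sweeps the whole arm around the base, so the tip moves purely in "y": that shows up as a large J[1][0] entry and a near-zero J[0][0] entry.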

INVERSE KINEMATICS
Inverse kinematics, as the name implies, is the inverse process of finding the joint angles that correspond to a given position of the end effector in 3D space. Figure 8 illustrates the concept and Equation (6) gives us the general mathematical formula.
q = f^-1(X)    (6)

FIGURE 8
Robotic arm inverse kinematics

The “Jacobian Transpose” and the “Jacobian Pseudo-inverse” are two well-known iterative methods for computing the inverse kinematics. Equation (7) shows the general formula for the Jacobian Transpose inverse kinematics method, which resembles Equations (5) and (6).
Δθ = α · J^T · ΔX    (7)
ΔX: delta displacement in x, y, z
Δθ: delta of all joint angles

The Jacobian transpose method allows us to compute a tiny increment/decrement in joint angles that corresponds to a given tiny movement of the tip of the end effector in 3D space. Thus, it can be used iteratively to calculate the inverse kinematics necessary to take the end effector to a new position. To do so, we first calculate the Euclidean distance between the start (actual) position and the end (desired) position of the end effector. Then we extract its components in "x", "y" and "z" and divide them by a factor of, say, 10 or 20. Because the Jacobian is a linear approximation, it will work better over tiny displacements. The vector of those three components will be our delta displacement in position "ΔX" (ΔX = [δx, δy, δz]).

Second, we calculate the Jacobian matrix with the current joint configuration (that is, the set of start "θ" angles that correspond to the start position). Third, we multiply the transposed Jacobian matrix by "ΔX" (and a scaling factor "α") to obtain a vector "Δθ" (Δθ = [δθ1, δθ2, δθ3, δθ4]) that gives us the corresponding angle increment or decrement for each joint that will cause the aforementioned "delta" displacement of the arm's tip, which will be one tenth or one twentieth of the initial Euclidean distance. Fourth, after modifying the joint angles by "Δθ" (i.e. θ = θ + Δθ), we take these new angles and the resulting intermediate position of the end effector as the new start, and repeat the process iteratively until the Euclidean distance is small enough (say, 10^-3) for us to consider the error negligible for practical purposes.
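The four steps can be sketched end to end in Python. This is a hedged, self-contained sketch, not the author's MATLAB implementation: the Jacobian is approximated by finite differences, the link lengths are illustrative placeholders, the remaining distance is divided by 10 as the text suggests, and the scaling factor "α" is picked each iteration with a common closed-form line-search choice for the Jacobian transpose method:

```python
from math import cos, sin, radians

D1, R2, R3, R4 = 10.0, 10.0, 10.0, 5.0  # illustrative link lengths (assumed)

def dh_matrix(theta_deg, alpha_deg, r, d):
    th, al = radians(theta_deg), radians(alpha_deg)
    return [[cos(th), -sin(th)*cos(al),  sin(th)*sin(al), r*cos(th)],
            [sin(th),  cos(th)*cos(al), -cos(th)*sin(al), r*sin(th)],
            [0.0,      sin(al),          cos(al),         d],
            [0.0,      0.0,              0.0,             1.0]]

def matmul4(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(q):
    t = dh_matrix(q[0], 90.0, 0.0, D1)
    for theta, r in ((q[1], R2), (q[2], R3), (q[3], R4)):
        t = matmul4(t, dh_matrix(theta, 0.0, r, 0.0))
    return [t[0][3], t[1][3], t[2][3]]

def numeric_jacobian(q, eps=1e-4):
    J = [[0.0]*4 for _ in range(3)]
    for j in range(4):
        qp, qm = list(q), list(q)
        qp[j] += eps
        qm[j] -= eps
        xp, xm = forward_kinematics(qp), forward_kinematics(qm)
        for i in range(3):
            J[i][j] = (xp[i] - xm[i]) / (2.0 * eps)
    return J

def ik_jacobian_transpose(q0, target, tol=1e-3, max_iter=2000):
    """Iterate delta_theta = alpha * J^T * delta_X (Equation (7)) until the
    Euclidean distance to the target position falls below tol."""
    q = list(q0)
    for _ in range(max_iter):
        x = forward_kinematics(q)
        err = [target[i] - x[i] for i in range(3)]
        if sum(c*c for c in err) ** 0.5 < tol:
            break
        dx = [c / 10.0 for c in err]      # step toward a tenth of the distance
        J = numeric_jacobian(q)
        jt_dx = [sum(J[i][j]*dx[i] for i in range(3)) for j in range(4)]      # J^T dx
        jjt_dx = [sum(J[i][j]*jt_dx[j] for j in range(4)) for i in range(3)]  # J J^T dx
        den = sum(c*c for c in jjt_dx)
        if den < 1e-12:                   # gradient vanished (singularity)
            break
        alpha = sum(dx[i]*jjt_dx[i] for i in range(3)) / den
        for j in range(4):
            q[j] += alpha * jt_dx[j]
    return q
```

Starting from the article's default pose q = (0, 90, -90, -90) and asking for a nearby reachable target, the loop typically converges in a few hundred iterations at most; the error shrinks monotonically as long as the arm stays away from singular configurations.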

The Jacobian pseudo-inverse kinematics method is a similar method, in which we use the pseudo-inverse of the Jacobian instead of the transpose. The pseudo-inverse is said to be computationally faster than the transpose, and the procedure and results are almost the same: We compute the Jacobian pseudo-inverse matrix and multiply it by “ΔX” to also obtain “Δθ”, and repeat that iteratively. Equation (8) shows the general formula for the Jacobian pseudo-inverse kinematics, and Equation (9) shows the formula of the Jacobian pseudo-inverse.
Δθ = J+ · ΔX    (8)

J+ = J^T (J J^T)^-1    (9)

Code lines 572 through 596 from the button_calc_invkin_Callback() function in the “Robotic_Arm_GUI.m” file show the implementation of the Jacobian pseudo-inverse kinematics algorithm. Computing the pseudo-inverse or the transpose of a matrix in MATLAB is as easy as calling the “pinv” or the “transpose” MATLAB functions.

The start pose is the default pose of the robotic arm when the system is initiated. The pose at zero degrees for all joints (Figure 3 and Figure 6) isn’t particularly appealing as a start pose, for both practical and esthetic reasons. Figure 9 shows a more convenient one with a joint configuration q = {0, 90, -90, -90}. This start pose is programmed in the Robotic_Arm_GUI_OpeningFcn() function to be set by default when the GUI is opened. See code lines 114 through 140 in the aforementioned function.

FIGURE 9
Robotic arm start pose


TESTING THE ROBOTIC ARM
I highly recommend that you first test and debug the robotic arm by sending manual commands to the embedded controller. Once that works well, proceed to test it with the MATLAB code. To do so, connect the embedded controller to the companion computer and open the Arduino's USB port from the Arduino IDE's Serial Monitor (or another software terminal emulator), to manually send commands to the embedded controller and drive the joint motors. For instance, you can send the command "2$-13&" to select joint 2 and move it to -13 degrees (see Part 1 of this article for the command protocol specification). The embedded controller's serial command interface also implements additional test commands. For instance, if you type "+" the current selected joint (for example, joint 2) will increase its angle by 10 degrees, and if you type "-" it will decrease it by the same amount.

It is also possible to send multiples of these commands, such as "+++" to increase the angle by 30 degrees, or "----" to decrease it by 40 degrees, and so on. A "c" command will take the current selected joint to its default angle (the start pose). If you want to change the current selected joint, just type the joint number, for example "3" to change to joint 3. If you want to change the current selected joint and at the same time move it, you can type, for example, "4++". That will change the current joint to 4 and then increase its angle by 20 degrees.
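To clarify the command grammar, here is a small host-side simulator of those test commands in Python. It is reconstructed from the examples in the text ("2$-13&", "+", "-", "c", "4++"), so the real firmware's parser may differ in details; the default angles are the article's start pose:

```python
# Default (start pose) angle per joint, in logical degrees.
DEFAULT_ANGLES = {1: 0, 2: 90, 3: -90, 4: -90}

class ArmCommandSimulator:
    """Interprets the embedded controller's test commands on the host side."""

    def __init__(self):
        self.angles = dict(DEFAULT_ANGLES)
        self.selected = 1

    def feed(self, text):
        i = 0
        while i < len(text):
            ch = text[i]
            if ch.isdigit():
                joint = int(ch)
                # "n$angle&" form: select joint n and set an explicit angle.
                if i + 1 < len(text) and text[i + 1] == '$':
                    end = text.index('&', i)
                    self.angles[joint] = int(text[i + 2:end])
                    self.selected = joint
                    i = end + 1
                    continue
                self.selected = joint        # bare digit: just select
            elif ch == '+':
                self.angles[self.selected] += 10
            elif ch == '-':
                self.angles[self.selected] -= 10
            elif ch == 'c':
                self.angles[self.selected] = DEFAULT_ANGLES[self.selected]
            i += 1
```

Feeding "2$-13&" puts joint 2 at -13 degrees, a following "+++" raises it to +17, and "4++" selects joint 4 and moves it from its -90 degree default to -70.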

Once the robotic arm works well with manual commands, you can test it with the MATLAB code by running the “Robotic_Arm_GUI.m” file to open the GUI. Be sure to have the Arduino board’s USB cable connected to the companion computer and the assigned virtual serial port name changed in the Robotic_Arm_GUI_OpeningFcn() function before attempting to run the MATLAB code.

Meanwhile, the embedded controller outputs debugging messages serially via pin 6 in the Arduino board (check the circuit diagram from Part 1 of this article). You can connect a USB to UART interface’s serial input pin to pin 6 of the Arduino to receive debug messages in a software terminal emulator, and check the operation of the embedded controller at all times.

CONCLUSIONS AND UPGRADES
The arm structure and servo motors I used for the prototype don’t have a particularly stable or accurate performance. Nevertheless, the robotic arm performs pretty well for its stated purpose, which is to exemplify key concepts, such as PID control, controller/computer interfacing, forward/inverse kinematics and their implementation in code.

Some improvements I have in mind for this project are the following:

First, I would like to try to make the robotic arm draw on a surface. For that, the robotic arm would have drawing paths stored in arrays, as sequences of goal coordinates that it must follow one by one, until the last one is reached.

Second, I would like to try adding computer vision to the robotic arm, to be able to recognize objects, extract their location and automatically fetch and move them to other places.

Third, I would like to add two more degrees of freedom, optionally using at least one prismatic joint. It would also be interesting to gradually replace the servo motors with DC motors, each with its own PID control loop. I could also try to build my own "smart servo motors" by giving each DC motor its own H-bridge driver and a microcontroller running the PID control loop.

Finally, I would like to convert the MATLAB companion computer code to the Python programming language, which is open source, free, and preferred in some educational settings. With a Python version, more people could potentially benefit from the work presented in this project. Converting the MATLAB code to Python, or any other language for that matter, is a good exercise I can recommend to anyone willing to delve deep into the algorithms described in this project. To convert the code correctly, you must reach a reasonable level of understanding of the algorithms involved. And while doing so, you'll also gain a reasonable understanding of both the source and target languages.

For detailed article references and additional resources go to:
www.circuitcellar.com/article-materials
References [1] through [6] as marked in the article can be found there.
