How to use a custom TFLite model with 2 classes on a Raspberry Pi with a Coral?

Two days ago I created a custom TFLite model from an image dataset. It reaches 97.4 % accuracy and has only 2 classes (person, flower).

I converted the model so it can run on the Google Coral TPU plugged into my Raspberry Pi.
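Roughly, that conversion step looks like this (a minimal sketch rather than the exact script used here; it assumes a Keras model saved as model.h5, full-integer post-training quantization with the TensorFlow 2.x converter, and edgetpu_compiler installed on the development machine):

import numpy as np
import tensorflow as tf

# Load the trained Keras model (hypothetical file name).
model = tf.keras.models.load_model('model.h5')

def representative_data_gen():
  # Hypothetical calibration data; in practice this should yield real training images.
  for _ in range(100):
    yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# Full-integer post-training quantization, as required by the Edge TPU.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model.tflite', 'wb') as f:
  f.write(converter.convert())

# The quantized model is then compiled for the Edge TPU on the development machine:
#   edgetpu_compiler model.tflite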

At the moment I am running into some problems, and the Google Coral documentation is not really working out for me.

Language: Python 3

Libraries:

  • Keras
  • TensorFlow
  • Pillow
  • Picamera
  • Numpy
  • EdgeTPU Engine

Project tree:

model (subfolder)
  model.tflite
  labels.txt
video_detection.py

Here is the Python code (it is actually the code from the documentation):

import argparse
import io
import time

import numpy as np
import picamera

import edgetpu.classification.engine

def main():
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '--model', help='File path of Tflite model.', required=True)
  parser.add_argument(
      '--label', help='File path of label file.', required=True)
  args = parser.parse_args()

  # Build {index: name} from the label file; each line is expected to be "<index> <name>".
  with open(args.label, 'r', encoding="utf-8") as f:
    pairs = (l.strip().split(maxsplit=2) for l in f.readlines())
    labels = dict((int(k), v) for k, v in pairs)

  engine = edgetpu.classification.engine.ClassificationEngine(args.model)

  with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 30
    _, width, height, channels = engine.get_input_tensor_shape()
    camera.start_preview()
    try:
      stream = io.BytesIO()
      for foo in camera.capture_continuous(stream,
                                           format='rgb',
                                           use_video_port=True,
                                           resize=(width, height)):
        stream.truncate()
        stream.seek(0)
        input = np.frombuffer(stream.getvalue(), dtype=np.uint8)
        start_ms = time.time()
        results = engine.ClassifyWithInputTensor(input, top_k=1)
        elapsed_ms = time.time() - start_ms
        if results:
          # Overlay "<label> <score>" and the inference time on the camera preview.
          camera.annotate_text = "%s %.2f\n%.2fms" % (
              labels[results[0][0]], results[0][1], elapsed_ms * 1000.0)
    finally:
      camera.stop_preview()

if __name__ == '__main__':
  main()

How I run the script:

python3 video_detection.py --model model/model.tflite --label model/labels.txt

Error:

Traceback (most recent call last):
  File "video_detection.py", line 41, in <module>
    main()
  File "video_detection.py", line 16, in main
    labels = dict((int(k), v) for k, v in pairs)
  File "video_detection.py", line 16, in <genexpr>
    labels = dict((int(k), v) for k, v in pairs)
ValueError: not enough values to unpack (expected 2, got 1)
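The exception is raised while parsing the label file, so a quick throwaway check (a sketch; it assumes the model/labels.txt path from the command above) shows which lines do not split into the expected two values:

# Sketch: report every line of the label file that does not split into "<index> <name>".
with open('model/labels.txt', 'r', encoding='utf-8') as f:
  for num, line in enumerate(f, start=1):
    parts = line.strip().split(maxsplit=1)
    if len(parts) != 2:
      print('line %d is malformed: %r' % (num, line))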

For me it is quite hard right now to integrate a custom model and use it with the Coral.

Documentation:

  • https://coral.withgoogle.com/docs/edgetpu/models-intro/

  • https://coral.withgoogle.com/docs/edgetpu/api-intro/

  • https://coral.withgoogle.com/docs/edgetpu/tflite-python/

  • https://coral.googlesource.com/edgetpu/+/refs/heads/release-chef/edgetpu/

Thanks for reading, regards

E.


The error is in the labels.txt file:

  labels = dict((int(k), v) for k, v in pairs)
ValueError: not enough values to unpack (expected 2, got 1)

It looks like some of the lines in it have only one value instead of two.
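For a two-class model, the label file that this parsing code expects has one "<index> <name>" pair per line, for example (hypothetical content):

0 person
1 flower

If labels.txt only lists the class names (one bare name per line), either add the index in front of each name or derive it from the line number; a minimal sketch of the latter:

# Sketch: tolerant label loading that falls back to the line number as the class index.
labels = {}
with open('model/labels.txt', 'r', encoding='utf-8') as f:
  for i, line in enumerate(f):
    parts = line.strip().split(maxsplit=1)
    if len(parts) == 2:
      labels[int(parts[0])] = parts[1]   # "<index> <name>" line
    elif len(parts) == 1:
      labels[i] = parts[0]               # bare "<name>" line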

