ModalAI Forum

    How to deploy onto NPU of Sentinel?

gerald.wang:

I reviewed the posts, but I didn't find any discussion of deployment onto the NPU of the Sentinel. Would you please advise on this? Thanks.

tom (admin) @gerald.wang:

        @gerald-wang https://docs.modalai.com/voxl-tflite-server/#deep-learning-on-voxl-2s-gpu-and-npu

gerald.wang @tom:

@tom Thanks, Tom. I already checked the link. I understand that for the GPU delegate, "gpu" needs to be specified in /etc/modalai/voxl-tflite-server.conf. My question is what value needs to be set in this configuration file to trigger NPU deployment. From my testing, specifying "npu" behaves the same as specifying "gpu".
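For reference, the snippet below is a minimal sketch of how one could inspect and change the delegate value in /etc/modalai/voxl-tflite-server.conf on the device. It assumes the file is JSON and that the delegate is stored under a top-level key named "delegate"; the key name and the candidate values ("cpu", "gpu", "npu") are assumptions taken from this discussion, not confirmed against the voxl-tflite-server source or documentation.

```python
#!/usr/bin/env python3
# Minimal sketch: read and update the delegate entry in the
# voxl-tflite-server config. Assumes the file is JSON and that the
# delegate is stored under a top-level "delegate" key (an assumption
# based on the discussion above, not verified against the service).
import json

CONF_PATH = "/etc/modalai/voxl-tflite-server.conf"

def get_delegate(path=CONF_PATH):
    """Return the currently configured delegate string, if present."""
    with open(path) as f:
        conf = json.load(f)
    return conf.get("delegate")

def set_delegate(value, path=CONF_PATH):
    """Set the delegate and rewrite the config in place.

    `value` would be e.g. "cpu", "gpu", or "npu" per the thread above;
    the accepted set of values is an assumption, not a documented list.
    """
    with open(path) as f:
        conf = json.load(f)
    conf["delegate"] = value
    with open(path, "w") as f:
        json.dump(conf, f, indent=4)

if __name__ == "__main__":
    print("current delegate:", get_delegate())
```

After editing the file, the voxl-tflite-server service would presumably need to be restarted for the change to take effect.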

CPU monitoring reports that there are 8 CPUs on the Sentinel. Which of the 8 CPUs are NPUs?

          Looking forward to hearing from you.
