Seeed Studio at Smart City Expo 2024: Empowering the Future of Urban AI Hardware Solutions
Join Seeed Studio at Smart City Expo World Congress (SCEWC) 2024 in Barcelona! As urban areas worldwide continue to evolve, the demand for smarter, more connected solutions is driving advancements in AI hardware and edge AIoT.
From November 5-7 at Gran Via, Hall 2, Booth F71, we’re excited to showcase our latest lineup of sensing, networking, and edge AI devices, designed to transform cities through real-time data, seamless connectivity, and high-speed AI processing at the edge.
1. AI Sensors: Multimodal Sensing for a Smarter Environment
A smart city depends on the ability to sense, monitor, and respond to its surroundings. Our AI sensors offer versatile environmental monitoring, long-range connectivity, and advanced scene detection. With over 100 open-source, pre-trained models, these sensors bring multimodal sensing to urban spaces, capturing data from air quality to traffic flow in real time to support an adaptable and responsive city environment.
Stay Connected in Remote Areas
Offering long-range, low-power connectivity even in remote areas beyond cellular or WiFi coverage, the SenseCAP Card Tracker T1000-E is the world’s first IP65-rated Meshtastic device. Compact and card-sized, it easily fits in your pocket or attaches to assets, operating seamlessly on the Meshtastic LoRa mesh network for high-precision, low-power positioning and communication. Additionally, the T1000-E features onboard sensors that provide temperature and light data, making it ideal for remote connectivity in unpredictable environments.
2. AI Gateways for Seamless Connectivity and Control
Cities are powered by data, and our AI IoT gateways function as hubs for data collection and connectivity. These gateways provide versatile connectivity options and advanced AI capabilities, collecting, processing, and distributing data across urban areas. By facilitating real-time adjustments and insights, our AI gateways help cities operate more efficiently and respond instantly to changing conditions.
3. Generative AI at the Edge with reThings Devices Powered by NVIDIA Jetson Orin
With reThings, powered by NVIDIA Jetson Orin modules, you can now deploy generative AI directly to the edge. Supporting advanced generative AI models such as Llama, LLaVA, and VLA, these devices push the boundaries of robotics intelligence and multimodal interactions. The deployment of generative AI at the edge enables cities to integrate more interactive and autonomous systems, streamlining everything from traffic management to public safety.
Develop Generative AI-Powered Visual AI Agents for the Edge
Powered by NVIDIA Jetson Orin modules, which deliver up to 275 TOPS AI performance for edge AI and robotics, visual AI agents can help unlock new possibilities for real-time video analytics in urban settings. These agents can analyze both recorded and live video, answer questions in natural language, summarize scenes, trigger alerts, and extract actionable insights. They support a wide range of applications, from object detection to security monitoring, making it easier to maintain safety and efficiency across city landscapes.
Jetson Platform Services (JPS) provides an AI service that allows optimized visual language models (VLMs) to run locally on the NVIDIA Jetson platform. This AI service can be combined with other components of JPS to build a mobile app integrated alert system. This system is capable of monitoring live streams for user-defined events and sending push notifications when the VLM detects the events.
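The alert flow described above can be sketched as a simple monitoring loop. The `vlm_detect` and `push_notify` callables below are hypothetical stand-ins for the JPS VLM service and its mobile push-notification integration, not the actual JPS API; this is a minimal sketch of the pattern, assuming the VLM answers a yes/no question per frame and event:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Alert:
    frame_id: int
    event: str


def monitor_stream(
    frames: Iterable[str],
    events: List[str],
    vlm_detect: Callable[[str, str], bool],
    push_notify: Callable[[Alert], None],
) -> List[Alert]:
    """Check each frame against user-defined events; notify when one is detected."""
    alerts: List[Alert] = []
    for frame_id, frame in enumerate(frames):
        for event in events:
            # The VLM is asked, in effect: "does <event> occur in this frame?"
            if vlm_detect(frame, event):
                alert = Alert(frame_id, event)
                push_notify(alert)  # e.g. a mobile push notification
                alerts.append(alert)
    return alerts


if __name__ == "__main__":
    # Demo with a trivial keyword match standing in for the VLM:
    frames = ["empty street", "truck blocking the fire lane", "empty street"]
    monitor_stream(
        frames,
        events=["fire lane blocked"],
        vlm_detect=lambda frame, event: "fire lane" in frame,
        push_notify=lambda a: print(f"ALERT frame {a.frame_id}: {a.event}"),
    )
```

The real system would replace the lambda stand-ins with calls to the locally running VLM and the push-notification channel; the loop structure stays the same.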
Combining Vision AI and Generative AI with reCamera and Jetson Orin
reCamera, our newly released vision AI platform, delivers next-gen video intelligence when paired with Jetson Orin. This configuration, running NVIDIA Metropolis software development kits for vision AI applications and VLMs, provides building blocks to quickly deploy fully functional systems for vision and other edge AI applications. Using the “TinyML + LLM” architecture demonstrated at NVIDIA GTC earlier this year, reCamera captures critical frames while Jetson Orin extracts insights through a VLM. For instance, if a user asks, “How many objects are in front of me on the right?”, the system will automatically adjust the camera with a gimbal, capture an image, and respond via speaker: “There are 3 objects in front of you on the right.” This seamless integration of vision and generative AI provides real-time, context-aware insights.
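The “TinyML + LLM” round trip above — user question, gimbal adjustment, frame capture, VLM answer, spoken response — can be sketched as a short pipeline. Every callable below (`aim_gimbal`, `capture_frame`, `ask_vlm`, `speak`) is a hypothetical stand-in for illustration only; the real system would call the reCamera and Jetson Orin services:

```python
from typing import Callable


def answer_visual_query(
    question: str,
    aim_gimbal: Callable[[str], None],     # point the camera toward the region of interest
    capture_frame: Callable[[], bytes],    # reCamera captures a critical frame (TinyML side)
    ask_vlm: Callable[[bytes, str], str],  # VLM on Jetson Orin extracts the insight (LLM side)
    speak: Callable[[str], None],          # voice the answer through the speaker
) -> str:
    """One round trip of the TinyML + LLM architecture."""
    aim_gimbal(question)               # e.g. "in front of me on the right" -> pan/tilt target
    frame = capture_frame()            # grab the scene
    answer = ask_vlm(frame, question)  # natural-language answer from the VLM
    speak(answer)
    return answer


if __name__ == "__main__":
    # Demo with stubs reproducing the example exchange from the text:
    reply = answer_visual_query(
        "How many objects are in front of me on the right?",
        aim_gimbal=lambda q: None,
        capture_frame=lambda: b"\x00",
        ask_vlm=lambda frame, q: "There are 3 objects in front of you on the right.",
        speak=print,
    )
```

The design point is the division of labor: the lightweight TinyML stage decides *when* a frame is worth analyzing, so the heavier VLM only runs on critical frames.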
Join Us at Smart City Expo 2024
Explore how Seeed Studio’s AI hardware and edge AIoT solutions can transform urban spaces at Smart City Expo 2024. From multimodal sensing to generative AI at the edge, Seeed is here to empower the cities of tomorrow. Stop by Booth F71 in Hall 2 to see our demonstrations, connect with our team, and discover how our hardware solutions can help create a more connected, efficient, and responsive urban landscape.
📅 Date: November 5-7, 2024
📍 Location: Gran Via, Hall 2, Booth F71
We look forward to connecting with you at SCEWC 2024 and advancing the future of smart cities!