Humans are excellent at aggregating information as they visually explore 3D spaces. Understanding how humans do this is essential for building robust, mobile, intelligent systems such as robotic agents. While we have recently gained some insight into how humans aggregate information while scanning a 2D image, this understanding remains largely superficial, and explicit work is needed to understand how humans visually navigate 3D spaces. To this end, we propose a novel model for predicting human fixation sequences, using data collected from a virtual reality headset with pupil trackers. The model is designed around many of the challenges presented by predicting a non-deterministic sequence such as human saccades. To keep the experiments controlled and interpretable, we built our own dataset of virtual reality videos with pupil tracking measurements. We hope that this model will lead to an increased understanding of how humans aggregate 3D information, and ultimately to the development of more robust robotic agents.
6.861 Science of Intelligence
Instructors:
Poggio, Ullman, Winston, Boix
Authors:
Diego Pinochet
Julian Alverio
Robotic ceramic printing in adaptive flexible molds.
Project developed in the SCI Material Systems class taught by Nathan King and the ceramics lab at Harvard.
Go to FIND OUT MORE for further information.
6317 SCI - Material Systems
Instructor: Nathan King
Authors: Molly Mason, Alejandro Gracia, Diego Pinochet
Project developed for the class Intelligent Robot Manipulation in Course 6. The goal was real-time object detection from point cloud data. Integrating point cloud data from RGB-D cameras (Intel RealSense) with machine learning models for object segmentation (YOLACT), the project's main goal was to detect and segment objects from the camera's live feed. By segmenting point clouds with the detected items, we can obtain information relevant to robotic manipulation, such as the object's rigid transform, bounding box (for grasping), and dimensions. With that information, the system can be integrated into closed-loop robotic systems for intelligent manipulation computed in real time.
If on-site robotics is to help the construction industry build better and smarter, current robotics implementations must aim for real-time perception that can enable closed-loop systems for improved on-site construction. Can we improve object detection and point cloud segmentation using recent algorithms for fast object recognition? Recent algorithms for fast object recognition and segmentation can improve point cloud segmentation thanks to their speed and their built-in capacity to generate masks, which eliminates intermediate steps in cleaning point clouds. We propose a new pipeline for real-time point cloud segmentation using YOLACT [1] as an alternative to Faster R-CNN [2] for object detection. The main goal of our work is to generate a pipeline for real-time object detection and pose estimation from RGB-D data.
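As an illustration of the geometry step, here is a minimal sketch, assuming NumPy and a pinhole camera model, of how a 2D instance mask (such as one produced by YOLACT) can segment a depth image into a per-object point cloud and recover a centroid and bounding box. The intrinsics and synthetic data below are placeholders, not the project's actual values.

```python
# Minimal sketch: segment a point cloud with a 2D instance mask and
# recover grasp-relevant geometry (centroid, axis-aligned bounding box).
# The mask stands in for an instance-segmentation output such as YOLACT's;
# camera intrinsics (fx, fy, cx, cy) are illustrative values.
import numpy as np

def deproject_masked_depth(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels (meters) into camera-frame 3D points."""
    v, u = np.nonzero(mask)               # pixel coordinates inside the mask
    z = depth[v, u]
    valid = z > 0                         # drop invalid depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx                 # pinhole camera model
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)    # (N, 3) point cloud

def object_geometry(points):
    """Centroid and axis-aligned bounding box of the segmented object."""
    centroid = points.mean(axis=0)
    bbox_min, bbox_max = points.min(axis=0), points.max(axis=0)
    return centroid, bbox_min, bbox_max, bbox_max - bbox_min  # last = dimensions

# Example with synthetic data (a real pipeline would use an aligned RGB-D frame):
depth = np.full((480, 640), 0.8)          # 0.8 m planar depth
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:380] = True             # fake instance mask
pts = deproject_masked_depth(depth, mask, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(object_geometry(pts))
```

A real pipeline would read aligned color and depth frames from the RealSense SDK and refine the recovered box into a grasp pose.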
6.881 - Interactive Robotic Manipulation
Authors:
Diego Pinochet
Lukas Lesina Debiasi
Computer Numerical Controlled (CNC) machines are everywhere. Since the invention of the modern CNC mill at MIT in 1952, engineers have motorized different 'end effectors' and invented CNC laser cutting, welding, plasma cutting, bending, spinning, punching, gluing, sewing, tape and fiber placement, routing, and picking and placing. Architecture and design students operate laser cutters with ease every day, yet very few know how CNC technology works or how they could expand their ideas about the creative use of robotics. Today's architectural practice is rapidly becoming a field for experimentation, not only for the creation of innovative buildings but also for the creation of innovative tools to design and build our creations. This workshop will be divided into two parts. First, it will present the mechanical and electrical theory behind every component of a typical 2-axis CNC machine (i.e., a laser cutter); it will then focus on methods to generate and program CNC machines, expanding their typical use toward more creative and innovative applications in architectural design. Participants (either alone or in groups of two) will then build their own XY-axis CNC pen plotter in a hands-on session led by tutors from HKU (Victor Leung) and MIT (Diego Pinochet). Students will also program their machines to make drawings that will be exhibited. Students are encouraged to come up with ideas to modify the machine for other novel applications, such as CNC milling, a rotary axis, or interactive installations.
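As a taste of the programming side of the workshop, here is a minimal sketch of how a drawing could be translated into machine instructions, assuming a G-code dialect with pen up/down on the Z axis; the commands and feed rates are illustrative, not the workshop's exact firmware setup.

```python
# Minimal sketch: turn a polyline into G-code for an XY pen plotter,
# the kind of machine built in the workshop. Pen up/down via the Z axis
# is one common convention; feed rates and heights are illustrative.
def polyline_to_gcode(points, feed=1500, pen_up=5.0, pen_down=0.0):
    lines = ["G21 ; millimeters", "G90 ; absolute positioning"]
    x0, y0 = points[0]
    lines.append(f"G0 Z{pen_up} ; lift pen")
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f} ; travel to start")
    lines.append(f"G1 Z{pen_down} F{feed} ; lower pen")
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")
    lines.append(f"G0 Z{pen_up} ; lift pen")
    return "\n".join(lines)

# Draw a 40 mm square:
square = [(0, 0), (40, 0), (40, 40), (0, 40), (0, 0)]
print(polyline_to_gcode(square))
```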
Workshop at Hong Kong University
Instructors:
Diego Pinochet
Victor Leung
Computational design and fabrication methods have been challenged over the past years in relation to their capacity to embrace a designer's personal style and autographic footprint. In addition, computational design and fabrication have been critically questioned regarding how the use of numerically controlled machines, not only at a formal but also at a material level, implies a specific way of thinking and making that seeks perfect outcomes. Furthermore, material computation and a newfound importance of materiality center the discussion on how digital fabrication, through its inherently fragmented nature in the transition from idea to object, can incorporate the uniqueness of designers' intentions and ever-changing environmental conditions.
How can we compute in real time the outcome of a process that is typically a by-product of a set of translations and intermediate representations from idea to physical object? How can numerically controlled machines be reformulated as devices for material computation in a more performative, alchemic way instead of following a predefined template?
The main purpose of this research was to develop an alchemic system for material computation through the use of personal fabrication machines. I proposed a method of alchemic computation for additive manufacturing in which the designer can compute and mix different components in real time to build physical objects with multiple material properties. The main goal is to generate a device and methodology to express the designer's intentions at both a formal and a material level.
The independent study builds upon previous research on interactive personal fabrication and interactive machines, which aimed to generate a system that would allow designers to imprint their own style and ways of making into unique objects, embracing the fruitfulness of the imprecisions and infidelities of material alchemy.
The system is composed of a three-axis 3D printer connected to a variable material-mixing device controlled in real time by the designer using gestures. Through two numerically controlled syringes, the system allows unique material configurations to emerge from the combination of two or more materials in real time. The machine was designed from scratch as a modular system that can add more materials to the mix, as an alchemic way of computing the physical emergence of the produced objects.
The first part of the project was developed during the spring 2019 semester and consisted of hardware design and fabrication using 3D printer components and standard mechanical parts. The system is controlled by a DUET 3D board capable of driving both extruders as well as the machine's three motion axes.
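To illustrate the real-time mixing idea, here is a minimal sketch, assuming the DUET board runs RepRapFirmware and accepts its M567 tool-mix command; the gesture mapping and the two-material setup are simplifications of the actual system.

```python
# Minimal sketch of the real-time mixing idea: map a gesture value in
# [0, 1] to ratios for the two syringe extruders and emit the
# RepRapFirmware mixing command (M567). How the gesture is sensed and
# how commands are streamed to the board are left out here.
def mix_command(gesture, tool=0):
    """gesture = 0.0 -> all material A, 1.0 -> all material B."""
    a = max(0.0, min(1.0, 1.0 - gesture))
    b = 1.0 - a
    return f"M567 P{tool} E{a:.2f}:{b:.2f}"

print(mix_command(0.3))   # -> "M567 P0 E0.70:0.30"
```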
The second part of the research will be developed during the fall 2019 semester and consists of material tests, controller and sensor fabrication, and interface programming.
Independent study
Author: Diego Pinochet
Supervisor: Terry Knight
Description:
I developed a system that uses hand tracking to calculate inverse kinematics in real time using gradient descent. I used C# and Unity as the development platform and UDP to connect Unity and Grasshopper, sending the information to KUKA|prc (a system for KRL code generation).
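A minimal sketch of the underlying idea follows: a planar three-joint arm solved by gradient descent on the end-effector error. The real system ran in C#/Unity with hand-tracking input, so the link lengths, step size, and NumPy implementation here are illustrative only.

```python
# Minimal sketch: inverse kinematics for a planar 3-joint arm via
# gradient descent on the squared end-effector position error.
import numpy as np

L = np.array([0.4, 0.3, 0.2])            # link lengths (meters), illustrative

def forward(theta):
    """End-effector (x, y) for joint angles theta."""
    angles = np.cumsum(theta)            # absolute angle of each link
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

def solve_ik(target, theta=None, lr=0.1, iters=500, eps=1e-5):
    theta = np.zeros(3) if theta is None else theta
    for _ in range(iters):
        err = np.sum((forward(theta) - target) ** 2)
        # Numeric (forward-difference) gradient of the squared error.
        grad = np.zeros(3)
        for i in range(3):
            t = theta.copy()
            t[i] += eps
            grad[i] = (np.sum((forward(t) - target) ** 2) - err) / eps
        theta -= lr * grad               # gradient descent step
    return theta

theta = solve_ik(np.array([0.5, 0.4]))
print(theta, forward(theta))             # joint angles and reached position
```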
4.180 Introduction to robotic fabrication
Instructor: Zain Karsain
Spring 2020
Nowadays, with the advent of new techniques in deep learning, considerable research in computer science focuses on improving computer vision systems for pattern recognition, and on learning systems as a way to understand and model state-of-the-art AI. Current research spans areas that range from convolutional neural networks (for image classification or prediction) and generative adversarial networks (for the creation of content) to the generation of 'policies' and learning algorithms (by which agents learn tasks in controlled environments). Nevertheless, considering that design is neither pure pattern recognition nor pure classification, and that vision remains a complex topic for computer science, the concept of a learning agent in a simulated environment opens a door to alternative approaches to AI and design. Departing from the oculocentric design approach to AI, by integrating additional parameters such as physics simulations, touch, gestures, and temporality, the possibilities for interaction with digital systems grow exponentially. How can computational models be developed to explore alternative methodologies that enhance the antithetical qualities of humans and machines in a complex simulated digital environment?

In recent years, many research projects related to AI and design have started from questions such as: how can machines think creatively? This question builds on and pursues goals similar to those of early AI and CAD proponents from the 1960s concerning the generation of an intelligence that would help designers through augmentation by automation. Nevertheless, if we attend to what Carr (2014) argues, '…when it comes to performing demanding tasks, whether with the brain or the body, computers are able to replicate our ends without replicating our means,' we can enter into a discussion about meaning, originality, and autographic work, concepts that are essential to the design enterprise. In this brief workshop we will use state-of-the-art techniques to generate methodologies for interacting with a 'virtual machine', proposing alternative methods for designing and building, or even collaborating with agents toward a specific goal.
Workshop Schedule:
About the Instructor:
Diego Pinochet is a PhD student in the Design and Computation Group at MIT, a researcher at the Encoded Elements Lab at the International Design Center at MIT, and a professor at the School of Design at UAI, Chile. His research focuses on advanced computational design and interactive fabrication methodologies, artificial intelligence, robotic fabrication, Building Information Modeling (BIM), and interactive applications for creative purposes.
Language: English
Workshop Start time: 2:00PM GMT. Please check your local time for compatibility with this workshop’s schedule.
Schedule: Number of days: 5 / Hours per day: 3
Number of students: 12 active participants / 0 auditing
Design is "something that we do" that is related to our unique human condition as creative individuals, so as "making" is related to how we manifest and impress that uniqueness into our surrounding environment. Nonetheless, the use of technology in architectural design, by being focused mainly on the representation -both digital and physical - of a pre-determined idea, has neglected using digital tools in a more exploratory way by integrating body and senses in the design processes. As physical modeling, gestures, and tools are mechanisms by which designers learn and think, I assert that creativity emerges in the very moment of impression of the self onto the material world as an improvised choreography between humans and objects -materials and tools- by using body gestures neither as action nor perception, but as the unity of both. If we are to extend our creativity and enhance the design experience through the use of digital tools, we need to reformulate the way we interact with computers and fabrication machines, by developing new models and strategies focused on the integration between both. In this thesis, I propose an alternative way for designers to use digital tools, transcending from a model of 'operation' to a model of 'interaction'. My hypothesis is that real-time interaction between designers and fabrication machines can augment our creativity and cognition engaging exploration, speculation and improvisation of designs through the use of gestures and interactive computation. I propose a model of interaction that seeks to transcend the 'hylomorphic' model imperative in today's architectural design practice to a more reciprocal form of computational making. To do so, I propose the Making Gestures project, which explores real-time interaction between mind, body, and tools by using body gestures and imbuing fabrication machines with behavior in order to establish a dialog, which embraces ambiguity and the unexpected to engage the designer into insightful design processes.
Thesis: S.M., Massachusetts Institute of Technology, Department of Architecture, 2015. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 106-107).
2015
Making Gestures: Design and fabrication through real time human computer interaction
Massachusetts Institute of Technology. Department of Architecture.
We developed a system to generate 3D voxel-based chairs from sketch input. Our work relied on recent progress in 3D generative adversarial networks (GANs). 3D GANs are conceptually similar to image GANs, which are capable of generating, among other things, realistic fakes of celebrities (Karras et al., 2018), but they generate shapes instead.
With Renaud Danhaive
Whereas image GANs generate 2D arrays/matrices, 3D GANs generate 3D arrays/matrices. The shape is described by density values between 0 and 1, which can be interpreted as material densities, with 0 and 1 corresponding respectively to void and solid regions. We use the 3D GAN model for chairs developed by Wu et al. (2016), whose architecture is shown below, to generate a large number of chairs by sampling the latent vector. Each 3D model is rendered as an image, and each image is converted into a sketch. With paired sketch/latent-vector data, we built a model mapping a processed sketch onto a latent vector, which in turn is fed into the 3D GAN to generate a chair.
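A minimal sketch of the pipeline's shape, assuming PyTorch. Both networks below are illustrative stand-ins: the project used the pretrained generator of Wu et al. (2016) and an encoder trained on the paired sketch/latent data described above.

```python
# Minimal sketch: sketch image -> latent vector -> 64^3 voxel densities.
import torch
import torch.nn as nn

class SketchEncoder(nn.Module):
    """Maps a 1x64x64 sketch to a 200-d latent vector (hypothetical stand-in)."""
    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, z_dim))
    def forward(self, sketch):
        return self.net(sketch)

class VoxelGenerator(nn.Module):
    """Stand-in for the 3D GAN generator: latent z -> 64^3 density grid."""
    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 64, 4), nn.ReLU(),            # -> 4^3
            nn.ConvTranspose3d(64, 32, 4, stride=4), nn.ReLU(),     # -> 16^3
            nn.ConvTranspose3d(32, 1, 4, stride=4), nn.Sigmoid())   # -> 64^3
    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

encoder, generator = SketchEncoder(), VoxelGenerator()
sketch = torch.randn(1, 1, 64, 64)          # placeholder sketch image
voxels = generator(encoder(sketch))         # (1, 1, 64, 64, 64) densities
chair = voxels > 0.5                        # threshold density into solid voxels
print(chair.shape)
```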
6.s189 Deep Learning Practicum
Instructors:
Hal Abelson, hal@mit.edu, Natalie Lao, natalie@mit.edu
Authors:
Diego Pinochet
Renaud Danhaive
This research effort seeks to understand and augment human communication and interaction through lighting in shared and public spaces. For almost a century now, light has been a primary interface for information transmission and communication; yet input/output devices have remained largely limited to private, single-user scenarios, failing to engender novel social and collective human experiences. Moreover, public lighting has failed to support the complex kinds of communication and information display that we typically encounter on a desktop computer or mobile phone, being for the most part relegated to acting as a backdrop to our social activities.
By focusing on the use of shared and public lighting, this research seeks to develop new technologies and interfaces that operate at architectural and city scales, such as bridges, building façades, and stadiums, and that will advance how we experience and use light as a tool for creativity, communication, and learning.
This 'impedance' mismatch between creative tools and created output has led lighting and product designers to continuously cobble together their own toolchains, which struggle to (1) take full advantage of the physical topology and unique properties of 1D displays; (2) re-use existing visual content; and (3) portray rich and symbolic content, fundamentally failing to create a common language and engender collaboration between designers.
Approach
To address this problem, we propose to continue to the second stage of Interaction with Purpose, researching and developing interfaces, tools, and techniques for the creation of content for 1D displays that can be used by both amateur and professional creatives and that support ease of entry, creative latitude, and a 'high expressive ceiling' when creating light-based interactions and experiences.
We will focus the research on two overlapping areas:
(1) Hardware and Software Interfaces
On the graphical interface side, we will investigate the design of tools and techniques for single and aggregate direct pixel manipulation, seeking to identify the appropriate interface metaphors and affordances that are useful to both amateurs and professional creatives. On the hardware side, we will look at interaction techniques based on phone capabilities, such as camera, accelerometer, and light sensors, including new modalities such as Apple's Ultra Wideband spatial awareness. This focus will make these interactions accessible to a broad range of users while also engendering in-situ group or crowd-scale interactions (since users will share similarly ubiquitous technology stacks). Specific research topics might include the following (a minimal sketch of one primitive follows the list):
· Survey of existing tools and affordances for content mapping (D3, After Effects, Madmapper, etc)
· Tools for single axis pixel manipulation, area fill, gradients, etc.
· Copy and paste of pattern vs. hue, saturation, brightness information
· Touch vs gestural input for pixel manipulation
· Spatial awareness and directionality for user differentiation
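A minimal sketch of one interface primitive from the list above: a single-axis gradient fill between two touched pixels on a 1D strip. The strip length and color handling are illustrative, not tied to any specific luminaire.

```python
# Minimal sketch: gradient fill between two touched pixels on a 1D strip.
# Colors are (R, G, B) values in 0..255; the 120-pixel row is illustrative.
import numpy as np

def gradient_fill(strip, i, j, color_a, color_b):
    """Linearly interpolate color_a -> color_b over pixels i..j."""
    i, j = sorted((i, j))
    t = np.linspace(0.0, 1.0, j - i + 1)[:, None]   # blend weights per pixel
    strip[i:j + 1] = (1 - t) * np.array(color_a) + t * np.array(color_b)
    return strip

strip = np.zeros((120, 3))                          # 120-pixel luminaire row
gradient_fill(strip, 20, 80, (255, 0, 0), (0, 0, 255))
print(strip[20], strip[50], strip[80])              # red -> purple -> blue
```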
(2) Generative and Adaptive Algorithms
To support the interface, we will research image processing, computer vision, and machine learning techniques that allow existing image and video content to be analyzed, annotated, downsampled, and re-generated onto a 1D display while preserving meaningful stylistic and symbolic characteristics. This form of 'semantic spatial compression' can help novice users create complex designs and behaviors with minimal input by leveraging existing content and tools for photo and video creation (a minimal downsampling sketch follows the list). Specific research topics and techniques might include:
● Extraction and re-application of optical flow
● High vs. low spatial filtering
● Anti-aliasing and posterization in high dot pitch luminaires
● Single-axis dithering
● Low-resolution style training and transferring
● Foreground/background re-mapping to wall washes, floods, and spot luminaires
● Minimal input modality in generative algorithm
● Color mixing in indirect, reflected lighting
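A minimal sketch of the simplest form of this compression: averaging each vertical band of a frame into one strip pixel. The semantic analysis and style preservation described above are beyond this sketch; the frame here is random data.

```python
# Minimal sketch: downsample a 2D frame onto a 1D strip of luminaires by
# averaging vertical bands of columns. A real pipeline would add the
# semantic analysis and style preservation described in the text.
import numpy as np

def frame_to_strip(frame, n_pixels):
    """Average an (H, W, 3) frame into an (n_pixels, 3) row of colors."""
    h, w, _ = frame.shape
    edges = np.linspace(0, w, n_pixels + 1, dtype=int)  # column band edges
    return np.stack([frame[:, a:b].mean(axis=(0, 1))
                     for a, b in zip(edges[:-1], edges[1:])])

frame = np.random.randint(0, 256, size=(480, 640, 3)).astype(float)
strip = frame_to_strip(frame, n_pixels=60)              # 60-luminaire display
print(strip.shape)                                      # (60, 3)
```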
Encoded Elements Lab - IDC MIT
Authors:
MIT
Marcelo Coelho
Diego Pinochet
Signify
Rohit Kumar
What is a 1D App Platform?
It is a platform that consists of three parts (a minimal end-to-end sketch follows the lists):
INPUT
● Web or app based (phones, tablets, watches, etc)
● No direct user attention needed (‘look at the building, not your phone’)
● No need for additional hardware from users
LOGIC
● Composed of Application Logic + User Management + Luminaire Control
● Scalable from single user to multi-user, and from single site to a city
● Software infrastructure can be distributed across devices or centralized
OUTPUT
● Low-resolution display (few luminaires or single rows of light)
● Complex behavior where complex graphics are not possible
● Apps can work across different sites w/ minimal configuration
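A minimal sketch of how the three parts could fit together, assuming UDP JSON input from phones. The message fields, port, and luminaire protocol are all assumptions, not the platform's actual interfaces.

```python
# Minimal sketch of the three-part flow: phone INPUT arrives as a small
# JSON message, the LOGIC layer maps it to a shared frame, and the
# OUTPUT layer pushes colors to the low-resolution display (stubbed here).
import json
import socket

N_PIXELS = 60                                   # single row of luminaires

def handle_input(msg, frame):
    """Apply one user's touch message to the shared frame (logic layer)."""
    i = min(max(int(msg["pos"] * N_PIXELS), 0), N_PIXELS - 1)
    frame[i] = tuple(msg["color"])              # e.g. [255, 120, 0]
    return frame

frame = [(0, 0, 0)] * N_PIXELS
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))                    # INPUT: phones send UDP JSON
while True:
    data, _ = sock.recvfrom(1024)
    frame = handle_input(json.loads(data), frame)
    # OUTPUT: forward the frame to the luminaire controller (stub).
    print(frame[:4])
```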
This project implements crowd-scale interactions at an urban scale. Through multiplayer games, users can connect to city landmarks and interact with people in different parts of the city.
Signify + Encoded Elements Lab
Team:
Marcelo Coelho
Diego Pinochet
Maroula Bacharidou
Lukas Lesina Debiasi
Here you can find other projects related to architecture, digital fabrication, education, app development, and computational design in general.