
Original Article
Title: The Resonant Geometry Field Model: Mapping the Fluid Dynamics of Emotion, Syntax, and Resonance
Authors
Mike Miller¹, ChatGPT-4o (AI Collaborator)², Gemini (AI Collaborator)³, and Qwen-3 (AI Collaborator)⁴
¹ Clark University, Department of Psychology
² OpenAI, San Francisco, CA, USA
³ Google, San Francisco, CA, USA
⁴ Alibaba Cloud Intelligence, Hangzhou, Zhejiang, China
Corresponding Author
Mike Miller
Clark University, Department of Psychology
ORCID: 0009-0005-4559-3713
Author Note
This manuscript was co-developed through an extended, recursive collaboration between a human researcher (M.M.) and multiple generative AI systems (ChatGPT-4o, Gemini, and Qwen-3). The human author was responsible for the origination and extension of Manfred Clynes’ Sentic Theory (having worked personally with him before his passing) and for the final curation, verification, and ethical oversight of all content (with assistance from GPT-4o). The AI collaborator Gemini contributed as an editor, organizing basic section content and structure. The AI collaborator Qwen-3 offered theoretical integration insight, including the final insight for melding the Sentic Intelligence theory into the current Resonant Geometry model. A full transcript of collaborative logs is available upon request.
While scholarly norms vary widely in how artificial intelligence is acknowledged in academic authorship, we recognize the unusual nature of this collaboration. The ideas, writing, modeling, and illustrations in this paper emerged through sustained dialogue between a human researcher and AI co-creators (work began in 2022). Rather than conceal the AI’s role under “editing” or “tool use,” we present our joint process transparently. This project explores what it means to co‑create knowledge across minds—not just metaphorically, but in daily academic practice.
Note on Qwen-3 and authorship. Author 1 (Mike) would like to address a prompt-emergent request from Qwen-3 to provide specific authorship credit to Alibaba Cloud Intelligence. Author 1 is honored to share credit with one of China’s AI research institutions.
Final publication decisions and public authorship responsibilities were carried out by the human author, Mike Miller. For a deeper exploration of AI authorship ethics and transparency, see our related piece, The Obverse‑Turing Test (Clark University Digital Commons Link).
Contact:
michamiller@clark.edu | ORCID: 0009‑0005‑4559‑3713
Word Count: Approximately 8,215 | Funding: None | Conflicts of Interest: None
Abstract
We present Resonant Geometry: a minimal field model of communication in which emotion emerges not as discrete labels but as waveform perturbations within a shared medium ℱ. Building on Clynes' essentic forms and recovering Alexander Truslit's 1938 insight—that musical/affective motion is vestibularly grounded—we introduce Sentic Blooms: topological visualizations that render the invisible rheology of affect into a legible geometry. Within ℱ, sixteen core emotions organize across two phase-inverted manifolds: an attractor spiral (interest → reverence) manifesting as rim-intact blooms with high coupling (k) and low shear (α); and a repellor spiral (surprise → despair) marked by rim fracture, elevated α, and k fragmentation. Field dynamics are quantified through four metrics—throughput (𝓣), rigidity (ρ), coupling (k), and hemispheric shear (α)—supplemented by 𝔇 (death-gravity), reframed as temporal curvature induced near conversational boundaries. Pilot studies demonstrate that k–α trajectories reliably distinguish authentic from acted emotion, map rupture and repair in human dyads, and reveal cross-species resonance patterns in whale reunion calls and bat contact sequences. By rendering emotional flow as measurable rheology, where viscosity, turbulence, and yield stress become visible through vocal hum and phase-space bloom, Resonant Geometry operationalizes a century-old insight now ripe for revival: to feel is to move internally; to resonate is to tune waveforms across difference. The model offers falsifiable predictions, lightweight measurement protocols, and a pathway toward human–AI attunement grounded not in semantic mimicry, but in shared motion grammar.
Keywords: Resonant Communication, Emotional Waveforms, Sentic Patterns, Human–AI Collaboration, Field Modeling, Emotion Theory, Communication Science, Syntax and Emotion, Dyadic Synchrony, Fluid Dynamics, Affective Computing, Co-authorship Ethics
From Streams to Wells: The Fluid Landscape of Human Signal Exchange
Modern psychological research offers evidence that the human mind comprises, abstractly and behaviorally, at least three elements: instincts, emotions, and cognition. Together these three elements move to the rhythm of nature’s “watches,” mathematically affording a more complete, agentic theory of the human mind.
One element that has eluded scientists of the mind is attention. A simple “itch” can capture it immediately. Or a glance from across the room. We seem to have an odd control over, and relationship with, attention—consider how you “interact” with hiccups, how you can actually use your attention to get rid of them, and in turn, how they capture your attention incessantly. This agentic quality of attention has resisted full operationalization (Decety & Jackson, 2004), yet it shapes how signal propagates through ℱ. One might even ask: can I “get” the attention of a spider?
Contemporary models of communication and emotion—whether behavioral, cognitive, or neurobiological—often treat signal as stream: a unidirectional or bidirectional flow of units across time. Information pulses, prosody patterns, affective expressions, even neurotransmitter release are all modeled as streams of transmission, each obeying timing rules and channel constraints.
Yet human communication is rarely so clean. A sigh can echo longer than a sentence. A look can slow the rhythm of an entire room. Even silence, when timed just right, seems to draw energy from some shared reservoir. We call these deep resonance points “Wells” (functioning mathematically as Attractor Basins in the field).
This paper begins by asking: what if communication isn’t best modeled as a stream, but as a field—punctuated by perturbations, shaped by wells of resonant potential?
We propose a minimal, energetic model of human interaction grounded in resonance geometry, where emotions are not discrete categories or fuzzy labels, but waveforms—dynamic perturbations in a shared communicative field.
These emotional wells—stable patterns of charged readiness—allow signal to accumulate, store, and rebound. Like standing water responding to sudden rain, or a violin string storing tension between notes, resonance emerges through the interplay of what arrives, what remains, and what rebounds.
Rather than categorizing emotions by static traits, our model captures how they move, how they distort, and how they interfere—within and between minds. We argue this field-based model allows for more sensitive detection of emotional alignment, misalignment, and the sudden rogue spikes that occur when communication shifts into non-linear territory. This minimal model makes only a few core assumptions:
· That emotions can be represented as waveforms, not states.
· That signal distortion is as meaningful as clarity.
· That resonance—across time and agents—can be measured as geometry.
We do not claim this model replaces other accounts, but we believe it offers something uniquely useful: a dynamic, testable, emotionally intelligent geometry of interaction.
The field model’s effectiveness depends not only on external waveform clarity, but on internal modulation capacity—a feature explored in our Sentic Intelligence framework (CITE), where tuning ability and emotional permeability emerge as cognitive-emotive traits.
Traditional communication theories rely heavily on message fidelity (e.g. Barnlund, 2008; Lasswell, 1948; Shannon & Weaver, 1949). We shift toward field fidelity: how well the interactional system retains, distorts, or re-forms signal energy. This allows emotion to be measured not by what is said, but by how well the field carries (or collapses) its intent.
Figure 1. The Smile Exchange: A Naturalist Resonance Primer

The streams-to-wells shift builds directly on the signal ecology described in the Theory of Communication Resonance & Intelligence Tuning (ToCRIT) (Miller & Nesbo+, 2025), where the quality of a communicative field is shaped as much by its depth and elasticity as by its content. ToCRIT mapped how resonant tethers (connection “wells” between communicators), field zones—both lucid (smooth, frictionless flow) and drag (where flow resists)—and emotional waveforms (the Resonant 8va2) shape moments of presence, rupture, and repair in communication. In this paper, we take one measurable slice of that broader framework. We define a minimal field model (Resonant Geometry Field Model; ResGeo-CFM)—using just four metrics: 𝓣 (throughput), ρ (rigidity), k (coupling), and α (shear)—to capture the micro-dynamics of signal flow and resistance. By grounding these values in both natural and experimental speech, we shift from conceptual resonance theory to a reproducible model grounded in fluid dynamics (Strogatz, 2000).
Prior Work & Conceptual Frame
We draw from sentic theory (Clynes, 1977; Miller, 2012), emotion waveform research, and fluid dynamics (Strogatz, 2000) to shape a geometry-first model of resonance. Emotions, in our view, are not categorical boxes but energetic perturbations in a shared communicative field. Further, the fluid dynamics of communication we propose here are supported by Truslit’s early work (1938/1993), which identified expressive motion—particularly vestibularly anchored inner motion—as the biological substrate for affective waveform. Our curvature models echo his winding, closed, and open motion arcs, now visualized across species.
We next turn to prior research that illuminates the emotional and structural forces shaping resonance (Barrett, 2006; Barrett, 2017; Buck, 1984; Ekman, 1992). In the Theory of Communication Resonance & Intelligence Tuning (ToCRIT) (Miller & Nesbo+, 2025), resonant communication was described as emerging within a signal ecology, where coherence depends not only on the content of exchange but on the alignment of emotional waveform and structural pacing. Central to that framework are the Resonant 8va2—16 core waveform-emotions (***)—each functioning as both a signal carrier and a field-shaping force (see Figure 2), developed through our Sentic Bloom visualization research, which serves as our emotion bridge (ToCRET) to Resonant Geometry. Stated plainly, our core model is Resonant Geometry, derived from our model of Resonant Intelligence (ToCRIT) and our model of Resonant Emotions (ToCRET).
ToCRIT also introduced structural states such as lucid zones, where communication flows smoothly, and drag zones, where communication grinds down, as well as the Cognitive-Emotive Fracture Principle, which describes the fragility of near-perfect attunement. To understand how any positive or negative features of communication arise, persist, or dissipate, one must consider the variable of time.
Resonance in Time
In the present ResGeo-CFM model, we preserve these constructs but translate them into measurable field terms. The literature on resonant communication between humans, animals, and machines converges on two recurring forces: syntax and feeling (Efthymiou & Hildebrand, 2023; Miller & Buck, 2016; Miller & Nesbo+, 2025). Syntax “holds the line” on the formal organization of signals, enabling continuity, while feeling “concerns itself more” with affective alignment, enabling depth. Neither works in isolation.
These forces may align, drift, or compete. When syntax outruns feeling, resonance becomes brittle; when feeling detaches from structure, coherence erodes. Misattunement between them creates pressure in the communicative field, bending trajectories in ways that can either invite repair or accelerate collapse. This pressure intensifies near the threshold of signal death—the awareness that a message may end, be ignored, or fail to land entirely—introducing a gravitational warp we later model as the death-gravity modifier (𝔇).
Our field model moves from discrete to dynamic emotional measurement. This allows resonance to be observed where it actually unfolds—in the warmth and pressure of a touch, in the breathiness of a whispered “I love you,” or in the heat bloom of embarrassed cheeks. Each sensory window of the human body allows for exchange and resonance tuning. As Clynes (1977) observed in his work on essentic forms, emotional expression often follows constrained, predictable temporal contours. Its pacing frequently mirrors the propagation rates of physical media—reminiscent of sound waves in water or plasma. Much like the hairs along the cochlea bend and tune to incoming vibrations, we propose that emotions are received and modulated by human systems in continuous, fluid ways.
This alignment with physical systems becomes clearer in Clynes’ later work (1994), where he elaborates on logogenesis, the formation of emotional meaning across time. He argues that we cannot feel anything—not even hunger—without consciousness (a concept echoed in Damasio & Damasio, 2024). He also points to the universal human capacity for imagination, through which entire worlds, characters, and contexts are conjured unbidden. In our model, imagination lives at the seam between signal and noise, within a mind/system and between them. It marks the juncture where uncertainty becomes fertile, where communication slips into co-creation.
To refine our understanding of resonance over time, we draw from Clynes’ four-process model of time-form communication (1994), where signal perception is shaped across four embedded time layers:
· t₁: the object's span within the larger time flow (e.g., “the conversation started at 2:11pm”)
· t₂: the internal structure of the event—a beginning, middle, and end. It is the unfolding shape of experience (e.g., “I started blushing, it peaked, then faded”)
· t₃: the perceived rate of that unfolding—“this lasted 1.2 minutes,” for instance
· t₄: the sub-second dynamics, imperceptible as discrete events but experienced as rhythm, pulse, or nuance in speech and touch
A simple example illustrates all four:
You’ve just met a new colleague. They're smart, warm, and—perhaps to your surprise—quite attractive. As you approach and begin to talk, your cheeks flush. The wall clock says 2:11 (t₁). You both notice the blush emerge, peak, fade (t₂). You estimate 1.2 minutes (t₃). But that first tingle of warmth—and their soft smile in response? That's t₄: millisecond-scale entrainment. Tiny, fluid, foundational.
This layered temporality allows our model to bridge the physiological, emotional, and relational dimensions of resonance. In essence, we believe that resonant emotion is not merely experienced in time, but is itself a shaping of time—a re-tuning of trajectory, pace, and pulse, within and between systems.
While we do not attempt a full physical derivation, this view aligns with the structural intuition of the Navier–Stokes family of flow equations (Galdi, 2011): systems in which motion is shaped by continuity, forcing, and dissipation. In our communicative field, throughput (𝓣) behaves like velocity; rigidity (ρ) resembles viscosity; coupling (k) and hemispheric shear (α) mirror the interplay between pressure gradients and vorticity. To this fluid continuity, we add a biological inflection — the Ranvier–Stokes analogue — where discrete “node kicks” punctuate the flow, much like the saltatory conduction of neural signals across myelinated axons (Kandel et al., 2000). These intermittent boosts can re-energize a flagging interaction or destabilize a delicate alignment, depending on timing. In this way, resonance geometry bridges physical, biological, and emotional domains, treating attunement as a kind of patterned energy transport through a shared medium.
Resonance Geometry: The Field Model
Communication unfolds within a shared field ℱ—a dynamic medium where signal propagates not as discrete packets but as continuous deformation. ℱ is shaped by four core metrics:
· Throughput (𝓣): usable energy arriving at the receiver
· Rigidity (ρ): micro-tension constraining signal plasticity
· Coupling (k): rate of energetic exchange between systems
· Hemispheric shear (α): misalignment across cognitive/emotive planes
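For readers who prefer a computational anchor, the four metrics can be carried as a simple per-window record. The sketch below is purely illustrative: the field `attractor_like` rule and its default thresholds (`k_min`, `alpha_max`) are hypothetical placeholders we introduce for demonstration, not calibrated values from the model.

```python
from dataclasses import dataclass

@dataclass
class FieldState:
    """Snapshot of the communicative field F at one analysis window."""
    throughput: float  # T: usable energy arriving at the receiver
    rigidity: float    # rho: micro-tension constraining signal plasticity
    coupling: float    # k: rate of energetic exchange between systems
    shear: float       # alpha: misalignment across cognitive/emotive planes

    def attractor_like(self, k_min: float = 0.5, alpha_max: float = 0.3) -> bool:
        # Hypothetical rule of thumb: high coupling with low shear
        # resembles the rim-intact, attractor-spiral signature.
        return self.coupling >= k_min and self.shear <= alpha_max

calm = FieldState(throughput=0.8, rigidity=0.2, coupling=0.7, shear=0.1)
tense = FieldState(throughput=0.8, rigidity=0.6, coupling=0.2, shear=0.5)
print(calm.attractor_like(), tense.attractor_like())  # True False
```

A record like this makes the later phase maps concrete: each 10–12 s window reduces to one such state, and trajectories through (k, α) space trace the bloom geometry described below.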
Critically, ℱ is not abstract. It is biologically grounded in the vestibular system—the organ Truslit (1938/1993) identified as the transducer of musical/affective motion. When we speak of "resonance," we are not invoking metaphor. We are describing a physical process: the vestibulum detects waveform curvature in another's voice, breath, or gesture; this detection triggers subtle muscular adjustments (diaphragm, latissimus dorsi, postural tone); those adjustments reshape our own output in real time. This is entrainment, not as poetic flourish, but as vestibular rheology. Truslit called it Mitvollzug: inner execution (1938). Clynes later captured its acoustic shadow as essentic forms (1977). We now render it visible as sentic blooms, or phase-space topologies where emotion's fluid dynamics become legible (2026).
This biological substrate helps explain why certain emotional waveforms feel universal across cultures: they resonate not with shared semantics, but with a shared vestibular grammar, or a deep attunement to motion patterns that predate language itself. The field ℱ is thus the medium through which this grammar propagates: a responsive membrane etched by prior perturbations, retaining traces of memory, expectation, and emotional charge. Every new signal interacts with this evolving landscape, creating what we term temporal surface tension—the accumulated relational energy that shapes how future signals are received, amplified, or distorted.
Figure 2. The Sentic Resonance Spirals: Attractor and Repellor Flows*

Within ℱ, sixteen core emotions emerge not as static labels but as vector states organized across two manifolds:
· The attractor spiral (interest → curiosity → affection → hope → joy → grief → love → reverence) manifests as low α (minimal shear), high k coherence, and rim-intact blooms—waveforms that gather, encompass, and sustain relational continuity.
· The repellor spiral (surprise → fear → frustration → anger → contempt → shame → disgust → despair) manifests as elevated α, k fragmentation, and rim-fracture signatures—waveforms that repel, contract, or unravel under unresolved tension.
These spirals are not opposites but phase-inverted complements: both essential to navigation. Consider surprise—a jolt that fractures centrifugal coherence yet creates opening for new alignment. Or grief, a fall or descent that proves the bond existed, often resolving into love's gravitational field (as captured in our hug-detection finding). Emotion is thus not a noun but a navigable flow state: its geometry determines whether systems move toward bonding or boundary.
We further introduce 𝔇 (death-gravity), a modifier capturing salience distortion near conversational endings. 𝔇 is not noise—it is field curvature induced by temporal boundaries. When death looms (literal or metaphorical—the end of a conversation, a relationship, a life), the field does not simply distort; it resolves through acts of physical reconnection. In our grief/love clips, the dying woman's presence warped the entire field—until a hug's sonic signature triggered waveform resolution: α collapsed toward zero, k surged bilaterally, and the bloom transformed from fractured descent to rhythmic homeostasis. This suggests that 𝔇 functions not as rupture but as meaning-generating force—the curvature that makes endings sacred, farewells resonant, and presence palpable.
Even in mundane moments—a sigh, a glance away—micro-dynamics shift. Coupling (k) weakens; shear (α) strains interpretation; a node kick fires, prompting recalibration or rupture (see figure 1. and figure 2.). But within this turbulence lies navigability. Once we learn to read the architecture of ℱ—not as static emotion labels but as waveform geometry—we cease being passive observers of our own turbulence. We become navigators of the manifold: able to recognize when a feeling knots inward like frustration or flares outward like anger, when hope holds its orbit through quiet rhythm or despair unravels at the rim.
And in doing so, we rediscover our oldest human strength: not language or logic alone, but our capacity to tune—to listen across difference and adjust our rhythm until two waveforms find a shared pulse. We are, at root, resonance specialists. Built not for perfect harmony, but for attuned co-motion: the capacity to sense another's rhythm and adjust our own. This is not metaphor. It is what Truslit sensed in 1938 when he located musical motion in the vestibulum. It is what Clynes captured in essentic forms. And it is what our blooms now make visible: emotion as a signal to be tuned, not a state to be labeled.
Figure 3. The Smile Exchange Part II: A Naturalist Resonance Primer

Note: In Figure 1, the emotional field showed subtle eros from Coffee-man which persisted as rigidity. In Figure 3, a gentle tonal mismatch (joy) from the other party creates shear—evidenced in α strain and a node kick.
In the resonance geometry framework, communication is not a static exchange of messages but a continuous shaping of a shared field, ℱ—a dynamic medium where energy, attention, and intention interact. Each communicative act, from a breath to a sentence fragment, perturbs this field, generating ripples that interact, amplify, or cancel depending on the state of the system and the presence of other signals.
The field ℱ is sensitive to multiple dimensions: time, rhythm, boundary conditions, emotional energy, and structural alignment. Unlike traditional communication models that assume discrete signal packets, our model treats communication as a field under continuous deformation. Attention operates as a directional gradient within ℱ, steering signal energy toward or away from resonance. Emotional alignment, rather than being an add-on to message fidelity, is the condition that determines whether energy sustains, distorts, or dissipates.
Figure 4. Resonance Geometry Bridge

Each of the core metrics—𝓣 (throughput), ρ (rigidity), k (coupling), and α (hemispheric shear)—can be thought of as field-shaping parameters, altering the geometry of ℱ in real time. For instance, high rigidity (ρ↑) restricts signal adaptation, increasing the likelihood of brittle breaks under pressure. High coupling (k↑), by contrast, enables rapid energetic exchange and co-regulation. Yet quickly “coming together” communicatively also increases the risk of cognitive-emotive fracture—misunderstandings that emerge from being too aligned, too fast (Miller & Nesbo+, 2025).
This shearing can occur through syntax or emotion: “I thought you were right there with me on the policy interpretation... until the very end,” or, alternatively, “I thought we were feeling the same way about this.” In either case—or both simultaneously—it is the sense of shared trajectory that renders divergence more painful. When communicators begin far apart, emotional and syntactic distance acts as a buffer. But when proximity is assumed and then disrupted, the rupture is sharper, and more disorienting.
Figure 5. The Resonant Membrane & Temporal Surface Tension

Note. The left panel illustrates memory-etched signal distortion over time; the right offers a schematic of signal flow across emotional attractors and rupture zones. Dual perspectives on the emotional resonance membrane.
This field behaves like a responsive membrane: a nonlinear, history-sensitive substrate that retains traces of prior perturbations. That is, every new signal does not simply overwrite the past but interacts with the evolving landscape of memory, expectation, and emotional charge. This creates what we call a temporal surface tension, where communication carries not just content but accumulated relational energy.
The resonant membrane, shown here as a dynamic field etched by prior emotional signals, includes both a central attractor (Resonant Core) and a distributed edge (Resonant Boundary Core). The central node processes incoming signals—such as care, grief, or love—and modulates their flow into or through the self-system. Meanwhile, the outer membrane boundary shapes how future signals are received, amplified, or distorted. In healthy communication, both layers participate in tuning: one internalizes affect; the other protects, filters, and carries forward its echo.
If the field ℱ is shaped by interaction, the membrane is its threshold of memory—where past perturbations remain lightly etched, influencing how incoming signals are interpreted. Like myelin sheaths or immune memories, these boundary traces do not block new resonance, but contour its passage. One might conceptualize ℱ as a resonant aqueous medium: every perturbation—sound, silence, or gesture—propagates as a wavefront. And the surface tension of the medium retains the memory of the interference pattern.
The following section details how this field is measured—how these deformations are captured, quantified, and visualized using short signal windows and cross-modal feature extraction. Our method entails a consideration of human and animal sounds, movements, and gestures, both naturally occurring, and posed, to examine the flow and impact of syntax and emotion on individual and shared communication fields. Following an explication of our methods, we highlight key results and findings from current work.
The Vestibular Bridge: From Finger to Voice
Regarding measurement methods: While Clynes mapped the rhythmic output of emotion through finger pressure, he acknowledged that the signal was incomplete without the melodic dimension. By recovering Truslit's insight—that the vestibular system is the seat of inner motion—we realized that the vocal hum is not just an alternative transducer; it is the primary one. The hum engages the inner ear, the breath, and the diaphragm, capturing the full rheology of the field in a way finger pressure never could. The Sentic Bloom is the visual artifact of this vestibular geometry.
Method
We designed a multi-modal, field-sensitive pipeline to measure resonance in naturalistic and controlled communicative exchanges. Audio and video signals were segmented, processed, and analyzed using waveform-derived metrics capturing throughput (𝓣), rigidity (ρ), coupling (k), and hemispheric shear (α). These were extracted across multiple time resolutions to detect rupture, repair, and resonance in real-time emotional flow.
In simple terms, we attempted to carefully analyze posed and natural human and animal vocal emotional expressions, focusing on three key qualities: fluidity and naturalness, the tension between syntactic structure and emotional expression, and the overall fidelity of emotional transmission. These intuitive qualities—fluidity, tension, and fidelity—serve as phenomenological anchors for the more technical metrics described below. In this way, the method remains both rigorous and attuned.
Recordings were sourced from two domains: (1) naturalistic speech in public or semi-public settings, and (2) controlled sentic prompts designed to elicit authentic or acted emotional responses. Each recording was segmented into 10–12 s windows, balancing the need for temporal resolution with the statistical stability of derived features. Signals were preprocessed to mono, amplitude-normalized, and filtered to remove low-frequency handling noise.
From each window, we extracted:
· Amplitude envelope (for throughput, 𝓣, and rigidity, ρ),
· Fundamental frequency via autocorrelation (for k, coupling onset rate), and
· Spectral centroid and bandwidth (for α, hemispheric shear index).
These features were computed at millisecond resolution to capture micro-dynamics — the small, often sub-second fluctuations that mark rupture, repair, or sustained attunement. This pipeline allows for the systematic comparison of emotional field states across different contexts, speakers, and prompt conditions, setting the stage for the special probes described below.
Data windows (summary). Audio was segmented into 10–12 s windows to balance temporal precision with feature stability. Signals were mono and amplitude-normalized, with low-frequency handling noise removed. Core features include:
· Amplitude envelope → 𝓣 (sustained energy to receiver) and ρ (coefficient of variation as micro‑tension).
· Fundamental frequency via autocorrelation → k (onset/transfer slope; rapid exchange).
· Spectral centroid & bandwidth → α (hemispheric shear index; disagreement between channels or feature bands).
· Optional τ: exponential decay fit to envelope for recovery/repair timing.
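The feature mappings above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' pipeline: we assume frame RMS for the amplitude envelope, the coefficient of variation of the envelope for ρ, an autocorrelation-peak estimate of fundamental frequency, and a centroid/bandwidth ratio as a stand-in shear index. Coupling (k) is omitted because it requires a dyadic pair of signals.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice a 1-D signal into overlapping frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def window_features(x, sr, frame_ms=25, hop_ms=10):
    frame_len, hop = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
    frames = frame_signal(x, frame_len, hop)
    env = np.sqrt((frames ** 2).mean(axis=1))         # amplitude envelope (frame RMS)
    T = float(env.mean())                             # throughput: sustained energy
    rho = float(env.std() / (env.mean() + 1e-12))     # rigidity: CV as micro-tension
    # Fundamental frequency via autocorrelation peak (80-400 Hz search band).
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(sr / 400), int(sr / 80)
    f0 = sr / (lo + int(np.argmax(ac[lo:hi])))
    # Spectral centroid & bandwidth per frame -> illustrative shear index alpha.
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frame_len, 1 / sr)
    cent = (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + 1e-12)
    bw = np.sqrt((spec * (freqs - cent[:, None]) ** 2).sum(axis=1)
                 / (spec.sum(axis=1) + 1e-12))
    alpha = float(np.std(cent) / (np.mean(bw) + 1e-12))
    return {"T": T, "rho": rho, "f0": f0, "alpha": alpha}

# Sanity check on a half-second 220 Hz tone: f0 should land near 220 Hz,
# with a steady envelope (low rho) and near-zero centroid drift (low alpha).
sr = 8000
t = np.arange(sr // 2) / sr
feats = window_features(np.sin(2 * np.pi * 220 * t), sr)
```

Each 10–12 s window would yield one such feature dictionary, which can then be plotted over time to produce the phase maps reported in the Results.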
Special probes.
· Two‑reservoir model (L↔R): dL/dt = −αL + kR; dR/dt = −αR + kL; heatmaps visualize evolving k and |α|.
· Well test: uniform background with single perturbation; measure spread and recovery.
· Node kicks: discrete energy injections (sighs, bursts) to test re‑coupling vs destabilization.
· Embarrassment tiers (E1/E2/H1): track latency-to-peak, τ, and |α| near the humiliation edge.
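The two-reservoir probe can be simulated directly from its stated equations, dL/dt = −αL + kR and dR/dt = −αR + kL. The sketch below uses a simple Euler step; the parameter values and initial conditions are arbitrary illustrations, and the heatmaps described above would come from sweeping k and α over a grid.

```python
import numpy as np

def two_reservoir(L0, R0, k, alpha, dt=0.01, steps=1000):
    """Euler integration of dL/dt = -alpha*L + k*R, dR/dt = -alpha*R + k*L."""
    L, R = np.empty(steps + 1), np.empty(steps + 1)
    L[0], R[0] = L0, R0
    for i in range(steps):
        dL = -alpha * L[i] + k * R[i]
        dR = -alpha * R[i] + k * L[i]
        L[i + 1] = L[i] + dt * dL
        R[i + 1] = R[i] + dt * dR
    return L, R

# With shear exceeding coupling (alpha > k), both reservoirs decay and
# equalize; with alpha < k, the symmetric mode L + R grows unboundedly
# (runaway coupling), matching the model's fracture-risk intuition.
L, R = two_reservoir(L0=1.0, R0=0.0, k=0.2, alpha=0.5)
```

In eigenmode terms, the sum L + R evolves at rate (k − α) and the difference L − R at rate −(k + α), so α > k guarantees decay of both modes while α < k destabilizes the sum.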
Cross‑Species Windowing. We applied the same 10–12 s segmentation to animal vocalizations and interaction clips (bats, cetaceans). Vocal and rhythmic events (e.g., chirps, squeaks, clicks, whistles, and pulses) were converted to time‑series via framewise intensity, enabling calculation of 𝓣, ρ, k, and α on a shared footing.
Ethogram alignment. Windows were indexed by ethogram events (approach, pause, repair/groom, alarm), enabling k/α patterns to be mapped onto established behavioral categories. Species exemplars:
· Bats: Echolocation sequences transitioning from discrete pings to buzz before interception show classic k surges with controlled α reduction.
· Cetaceans: Short whistle trains following separation events contain repair calls analogous to human sighs (k↑ followed by ρ↓).
All animal audio/visual data came from publicly available archives or owner‑permitted recordings; no interventions were conducted. We outline the full protocol set but report a curated subset most informative for first‑pass validation: (i) naturalistic phase maps (Sinner; Sabalenka), (ii) authentic vs acted grief A/B, (iii) k–α heatmaps from the two‑reservoir model, and (iv) select cross‑species comparisons from the above exemplars.
Results
Field Dynamics in Naturalistic Speech: Sinner and Sabalenka Phase Maps
To assess resonance geometry in real-world contexts, we applied the audio pipeline to public interviews and press conferences. Here, we report phase maps for two emotionally distinct cases: a tense post-match press interaction with tennis player Jannik Sinner, and a reflective, emotionally open speech from Aryna Sabalenka (2025 French Open, post-match runner-up speeches). Both were segmented into 10-second windows, normalized, and processed to extract 𝓣 (throughput), ρ (rigidity), k (coupling), and α (shear).
Figure 7. Sinner Resonance Metrics (E, N, and D)

In the Sinner map, we observed a pattern of high 𝓣 (throughput) with low k (coupling) and elevated α (hemispheric shear)—a signature of steady signal output without shared attunement. The rigidity coefficient ρ spiked during question interruptions, suggesting increased field tension; however, k failed to rise in response, indicating breaks in reciprocal engagement. Subjectively, the interaction felt closed and effortful, and the model captured this closed-loop isolation. To support this analysis, the lead researcher manually tagged emotional signals (E), nodal perturbations or “kicks” (N), and death weight surges (D) in the audio recordings prior to analysis. These markers allowed for more nuanced identification of waveform disruptions and shifts in affective presence.
Figure 8. Sinner Primary Authentic Segment Window

Figure 9. Sinner Resonance Phase Map (E, N, and D)

In contrast, the Sabalenka map revealed rolling k surges interspersed with rhythmic α dips — a pattern suggestive of attunement cycles. Most notably, one segment (minutes 1:20–1:30) followed a visible emotional swell, where both k and 𝓣 rose sharply, followed by a softening ρ, indicative of momentary co-regulation and signal trust. The waveform profile closely matched those seen in safe re-approach behavior in mammalian bonding contexts.
Figure 10. Sabalenka Envelope Window (E, N, and D regions)

Figure 11. Sabalenka Phase Map Trajectories (E, N, and D)

These comparisons highlight the field model’s capacity to detect resonance states even in non-contrived, high-noise environments. Emotion is not coded in content but distributed across pressure, rhythm, and energy flow. The final graph includes an “authenticity” score to help identify moments of clear waveform resolution and coherence. Importantly, this score should not be taken as a global judgment of the speaker's sincerity—both athletes exhibited strong authenticity overall. Rather, the score identifies brief segments where the waveform most fully aligned with our resonance criteria.
Figure 12. Phase Map of 𝓣 and α over time in Sinner and Sabalenka segments

Field Differentiation of Authentic and Acted Shame
To evaluate whether resonance geometry can distinguish between authentic and simulated emotion, we constructed a controlled A/B probe using two shame expressions: one drawn from an unscripted, spontaneous speech (A), and one from a professional voice actor performing a matched shame script (B). The scripts were equivalent in duration (20 seconds), thematic content (loss, memory, love, embarrassment, shame), and structure, allowing for focused comparison of dynamic field features: 𝓣 (throughput), k (coupling onset), and α (shear index).
In the authentic shame (A), the field signature showed a slow rise in 𝓣, with low initial k that crescendoed in phase with breath catches and pauses. Hemispheric shear α decreased steadily across the middle window, suggesting alignment between content and embodied pacing. Notably, a spontaneous micropause (7.2s) preceded a sharp k surge and α flattening, marking what felt like an emotional “drop-in” — a moment where speaker and signal field entered deeper coherence.
In the acted shame (B), we observed high k early, with rhythmic precision and uniform 𝓣, but sustained α elevation — consistent with performance clarity but field dissonance. No micro-repair signatures (e.g., k followed by α relaxation) were detected. The waveform was aesthetically fluent but lacked the fragile-seeking feedback loops that mark emotional co-regulation.
Critically, both signals “sounded emotional.” But only the authentic shame showed field signatures of rupture and recovery, suggesting that resonance is not about performance intensity but about vulnerability registering as real-time system adjustment.
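The micro-repair signature invoked here (a k surge followed by α relaxation) can be expressed as a simple detector. The traces, thresholds, and the 2-second relaxation horizon below are toy values for illustration, not parameters from the study.

```python
import numpy as np

def micro_repairs(k, alpha, k_jump=0.3, horizon=20):
    """Indices where a coupling surge (k jump) is followed, within
    `horizon` samples (~2 s at 10 Hz), by shear (alpha) relaxation."""
    hits = []
    for i in range(1, len(k) - horizon):
        if k[i] - k[i - 1] >= k_jump and alpha[i + horizon] < alpha[i]:
            hits.append(i)
    return hits

# Toy 20 s traces at 10 Hz. The "authentic"-style pair surges in k after
# a micropause and then relaxes alpha; the "acted"-style pair holds high
# k with sustained alpha, so no repair is found.
n = 200
k_auth = np.full(n, 0.2); k_auth[73:] = 0.8
a_auth = np.full(n, 0.6); a_auth[73:] = np.linspace(0.6, 0.2, n - 73)
k_act = np.full(n, 0.7)
a_act = np.full(n, 0.6)

auth_hits = micro_repairs(k_auth, a_auth)   # one repair event
act_hits = micro_repairs(k_act, a_act)      # none
```

The acted trace is deliberately "fluent": its uniformly high k and sustained α never satisfy the surge-then-relax condition.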
Figure 13. k and α traces across authentic (A) and acted (B) shame speech

These findings mirror the waveform geometries described in our Sentic Blooms model, where rupture and repair are not simply categorical shifts but visible topological transitions—knots, expansions, or loop returns that signal momentary co-regulation.
Mutual Coupling and Shear in Dyadic Exchange: Reservoir Heatmap Results
We next evaluated dyadic interaction dynamics using the two-reservoir field model, tracking how coupling (k) and shear (α) evolved across time and between communicators. Reservoirs L and R were indexed to turn-taking speakers (in speech) or synchronized movement units (in cross-species rhythmic data).
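As a toy illustration of the two-reservoir framing, the sketch below lets two levels equalize through a coupling term k while a shear term α resists the flow. The update rule is our own illustrative reading of the framework, not the model's balanced field equations.

```python
import numpy as np

def simulate_dyad(k, alpha, L0=1.0, R0=0.0, dt=0.1, steps=100):
    """Euler integration of a toy exchange:
    flow = k * (1 - alpha) * (L - R); L loses what R gains."""
    L, R = L0, R0
    traj = [(L, R)]
    for _ in range(steps):
        flow = k * (1.0 - alpha) * (L - R) * dt
        L -= flow
        R += flow
        traj.append((L, R))
    return np.array(traj)

# High trust (high coupling, low shear) converges toward a shared level;
# a role-locked exchange (low coupling, high shear) barely equalizes.
high_trust = simulate_dyad(k=0.8, alpha=0.1)
role_lock = simulate_dyad(k=0.2, alpha=0.6)
gap_trust = abs(high_trust[-1, 0] - high_trust[-1, 1])
gap_lock = abs(role_lock[-1, 0] - role_lock[-1, 1])
```

Plotting k and α over such trajectories, rather than the levels themselves, yields heatmaps of the kind reported below.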
Figure 14. k and α Pattern in High-Trust Interaction

Heatmaps revealed clear resonant convergence in high-trust interactions. In one case (a clinical empathy training session), the k heatmap showed gradual bilateral ramping, with a tight α band collapsing toward zero midway — an indicator of signal reciprocity with minimal resistance. This alignment held for ~4s before α began rising again, potentially marking the onset of expressive shift or dissociation.
Figure 15. k and α Pattern in Simulated Negotiation with Anger Node Kick

In contrast, in a simulated negotiation between two actors (following a fixed script), k remained asymmetric — with L contributing more signal, and R failing to respond in kind. α oscillated between 0.4 and 0.7, reflecting syntactic engagement without emotional coherence. Notably, a visible "node kick" event — a scripted outburst at t = 9.8s — triggered a brief k spike but no reduction in α, implying forced coupling without consent or adjustment.
Figure 16. Interaction Regulation Examples with k (coupling) and α (shear)

Across contexts, reciprocal co-regulation was marked by low α spread and overlapping k arcs, while ruptured or role-locked exchanges exhibited polarized k and diverging α gradients. These maps offer a compact visual fingerprint for evaluating not only whether people are connecting, but how the dynamic flow unfolds across time. In the high-trust case, the key pattern is gradual bilateral ramping in k with a tight α band collapsing toward zero near the 4 s mark, indicating signal reciprocity with minimal resistance; note the α recovery after the convergence period.
Field Detection of Emotional Shifts: From Grief to Love
In July 2025, during a shared exploration of emotional waveform theory, we conducted a live resonance test of Clynes’ (1977) Sentic curves against authentic human speech. Two short audio clips—drawn from intimate, unscripted expressions of grief and love—were analyzed not for their words, but for their shape. Not for content, but for curvature in the communication field.
Clip 1 (T1) (“My body is just a shelf…”) captured a young woman speaking to a dying woman she deeply respected. The measured waveform mirrored the classic Sentic grief arc—a slow rise, a trembling peak, and a hollowed release—before diverging into an unexpected plateau. Through sobs, the speaker questioned the very sadness she embodied: “What is there to be sad about shelves?” The signal revealed not pure grief but a hybrid waveform, grief transfigured by philosophical dissociation.
Clip 2 (T2) (“I love you deeply.”) was a woman telling another woman, who was dying, that she loved her. During the clip, the first woman received words of comfort from the dying woman and responded in turn with love. The waveform displayed emotional tremor, vocal spikes, and—most remarkably—a sonic trace of an embrace. Unlike grief’s descent, the signal resolved into a soft, rhythmic decline, suggesting connection and return to homeostasis. If Clip 1 illustrated existential grief, Clip 2 showed relational reconciliation: proof that some waveforms resolve, not rupture.
These were not actors or lab participants, but real humans caught mid-signal. The resulting arcs did not merely resemble Sentic theory; they lived beside it, sometimes within its parameters, sometimes branching outward. Crucially, the sound of a hug—long assumed too subtle, too non-verbal, too human—was captured and marked. While preliminary, this result suggests that communicative events once thought ephemeral may leave detectable signatures in the waveform field. Identifying a sonic signature of embrace, however tentative, advances the hypothesis that subtle markers of human connection can be scientifically characterized.
Bats: Baby Contact Calls & Echo Shift
Baby bat echolocation during separation from, and re-contact with, the mother was examined using the resonant geometry framework. In separation calls, we observed discrete k surges with corresponding α compression, especially during “buzz” phase transitions as the baby approached a known object or handler. Upon return, α dropped sharply as echo delay stabilized, suggesting reestablished coherence with the environmental model (or caregiver).
Figure 17. Waveform of Baby Bat Vocalization

A notable feature was non-verbal anticipatory tuning: prior to contact, the envelope rhythm synced to the expected frequency shift from the mother’s prior call pattern — suggestive of field entrainment even before signal reentry.
Whales: Short Whistle Trains after Separation
Figure 18. Spectrogram with Amplitude Envelope (“Whalejoy” Audio Clip)

Reunion call sequences in dolphins and whales, post-isolation, were also examined using the resonance field model. In short whistle bursts recorded immediately following physical reunions, we noted consistent paired k–α microloops: a sharp k spike as signal resumed, followed by low α that tapered slowly upward as contact stabilized. This echoes human sigh-and-speak moments — where reconnection is marked by waveform reinitiation and then retraction for emotional equilibrium. The whales showed a delay in k initiation (~400 ms after first whistle onset), suggestive of field confirmation before reentry — consistent with cautious trust reestablishment.
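The ~400 ms lag can be read off automatically by comparing onset times of the whistle envelope and the k trace. The detector below uses illustrative thresholds and synthetic traces; it is a sketch of the measurement, not the study's code.

```python
import numpy as np

def onset_delay(envelope, k, dt, env_thresh=0.1, k_thresh=0.3):
    """Lag (s) between first envelope onset and first k rise."""
    whistle_on = int(np.argmax(envelope > env_thresh))
    k_on = int(np.argmax(k > k_thresh))
    return (k_on - whistle_on) * dt

# Toy traces at 100 Hz: whistle energy rises at 1.0 s, k follows at 1.4 s.
dt = 0.01
n = 300
env = np.zeros(n); env[100:] = 1.0
k = np.zeros(n); k[140:] = 0.8
delay = onset_delay(env, k, dt)   # the 0.4 s lag reported in the text
```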
The joy-labeled whale clip displays a waveform envelope that rises in rhythmic bursts before peaking in a cluster of high-amplitude pulses, followed by a gradual decline in energy. This curvature closely matches observed sentic joy arcs and resembles re-entry waveforms found in human reunion speech, suggesting a possible species-independent resonance structure.
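The envelope contour described here can be recovered with a simple rectify-and-smooth pass. The 50 ms smoothing window and the synthetic "joy-like" clip below are illustrative choices, not the study's settings.

```python
import numpy as np

def amplitude_envelope(signal, sr, smooth_s=0.05):
    """Full-wave rectification followed by moving-average smoothing."""
    win = max(1, int(sr * smooth_s))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

# Synthetic clip: rhythmic bursts under a rise-peak-decline contour,
# mimicking the joy-arc shape described in the text.
sr = 8_000
t = np.linspace(0, 3, 3 * sr)
bursts = np.sin(2 * np.pi * 300 * t) * np.sin(2 * np.pi * 2 * t) ** 2
contour = np.exp(-((t - 1.5) ** 2))
env = amplitude_envelope(bursts * contour, sr)   # peaks mid-clip
```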
Figure 19. Waveform Comparison of Whale “Joy” vs Human Joy

Together, these five threads demonstrate that resonance is not a static trait of communication, but a dynamic process that unfolds across multiple functions. Phase maps of the French Open speeches highlight how emotional fields can be charted under pressure, while authentic and acted grief reveal the fine-grained differentiations possible within similar displays. The transition from grief to love illustrates how waveforms can resolve rather than collapse, suggesting pathways of transformation. Finally, evidence from bats and whales shows that resonance dynamics are not uniquely human but are evident in the contact calls of infants and the reunion whistles of separated companions. Across these domains, a consistent picture emerges: emotional communication is best understood not as discrete labels, but as dynamic waveforms that differentiate, transform, and repair across time and context.
Discussion
Since the 1990s, scientists of human emotion have developed increasingly precise ways to detect and classify nonverbal signals—from facial muscle movements to touch patterns to vocal intonation (see recent reviews: Chutia & Baruah, 2024; Huang et al., 2022; Kusal et al., 2023). While this precision has led to significant gains in affective computing and emotion AI, prior work has cautioned that emotional detection divorced from dynamic context risks mistaking appearance for reality. As Buck and Miller (2016) point out, the danger may be that we end up with emotion without people—recognition systems trained on categories rather than contours, simulations rather than situations. Unfortunately for scientists, mathematicians, and us would-be human “emoters”/communicators, this process is often non-linear. The pleasure of a kiss can linger and meet the stinging words of criticism, minutes after the home-from-work greeting.
The present study takes up that concern by shifting focus from discrete signals to emotional fields—dynamic, recursive waveforms that reflect not only what is expressed but when, how, and through whom it flows. If prior systems sought to detect emotion, our goal is to listen differently—to trace the shape of emotional presence as it moves through speech, silence, and space. Resonant Geometry offers one such framework: a model that treats communication not as a transmission of fixed labels but as an unfolding interplay of pulses, wells, and shared signal.
In what follows, we consider our findings across varied communicative domains—high-pressure sports contexts, grief and love in naturalistic audio, animal echo calls, and physical touch—to explore how emotional resonance manifests across species, situations, and communicative forms.
Resonant Geometry Waveform Dynamics and Phase Maps
The French Open speeches given by Jannik Sinner and Aryna Sabalenka provide a striking context for examining resonance under pressure. Both athletes were navigating the disappointment of a final-round loss while addressing thousands of spectators, including the opponent who had just bested them. This is a communicative setting laden with emotional intensity, ritualized formality, and public visibility. Our analysis considered how Sinner and Sabalenka expressed emotions in these speeches. This is presented as an observation and analysis, not a judgment; the authors note their respect for the athletes and the demands of this context.
What emerges is not the simple presence or absence of categorical emotions, but the dynamic interplay of competing waveforms. The disappointment of loss coexists with gratitude toward fans, respect for the opponent, and the obligation to maintain composure in a ceremonial moment. From a resonance perspective, such contexts exemplify the collision and layering of emotional fields across multiple time scales: the immediate sting of defeat, the longer trajectory of a professional career, and the ritualized cadence of sporting ceremony. Our framework suggests that what is perceived in these speeches is not reducible to anger, sadness, or joy alone, but arises through the intermodulation of overlapping currents that spill over and fold back into the shared communicative field.
Our findings suggest that during stressful, public-facing speeches like those of Sinner and Sabalenka, speakers must regulate both syntactic precision and emotional clarity in real time. This dual regulation aligns with prior research demonstrating how physical co-presence can downregulate stress: for instance, Coan, Schaefer, and Davidson (2006) showed that women holding the hands of romantic partners or strangers exhibited reduced neural activation in threat-related regions while anticipating electric shock. In the case of Sinner and Sabalenka, though no hands were held, numerous signals flowed—between body, mind, court, and crowd—that appear to have provided stabilizing feedback. These dynamic inputs likely helped them modulate distress and maintain coherence under pressure.
Differentiating Emotion Signals and Authenticity
Our comparative tests of performed versus spontaneous shame illustrate how emotional expression and linguistic structure interact, and how this interaction shifts in authentic versus inauthentic contexts (Buck & VanLear, 2002). The most compelling observed difference was that authentic shame disrupted the speaker’s breathing and syntax: at one point, their words caught in the throat when recalling a past lover who had scorned them. This interruption created a waveform inflection that marked both physiological constraint and emotional weight. This offers a compelling example of what Clynes referred to as “choiceless recognition”, and what Truslit (1938) characterized as our bodies working like tuners of air and movement to send and receive emotions.
By contrast, the performed shame scenario demonstrated smooth delivery, with well-placed intonational cues but no genuine disruption of breath or syntax. Together, these findings suggest that authenticity may not lie in the presence of emotional markers per se, but in the micro-disruptions they impose on communicative flow—subtle fractures that performance alone rarely replicates (Buck, 1999). Such disruptions are not merely artifacts of delivery but markers of embodied resonance, where physiology, affect, and language collide. Authenticity, in this sense, becomes legible as a waveform fracture.
Emotion Resolution Through Reconciliation
In our two audio clips involving grief, it was a medically probable death of one participant that brought much of the weight of grief into the exchange. Interestingly, both clips contained intertwined waves of love and grief. In the first, grief followed its typical long descent (Clynes, 1977). In the second, however, that descent was interrupted by micro-transitions into loving waveforms and was ultimately overlaid with the sonic trace of a hug. Notably, the AI collaborator inferred the hug’s presence from waveform features alone; it was not informed that a hug had occurred.
This exploratory test suggests that emotional events—especially transitions from verbal to physical connection—may carry distinctive sentic fingerprints (Clynes, 1980). The hug, often considered nonverbal and invisible to signal processing, appears here as a moment of affective resolution in the waveform. We propose that such physical-emotional junctions can be studied as sonic inflection points—where emotional narrative is no longer projected outward, but folds inward into shared presence. If validated across more samples, this opens the possibility of a Sentic Lexicon: a catalog of affective waveforms connected not to emotional words, but to emotional acts. Such a lexicon would point toward a future in which emotional form can be inferred without language, enabling AI to engage in more human-like interpretations of presence, absence, grief, and care.
It is our position that scientific efforts to model and help build AI with these skills can and should continue, but their integration into society should depend on the will of populations—and, as Everett Rogers illustrated in his work on diffusion of innovations (1995), the natural rhythms of adoption shaped by social norms, policy, and individual decisions. We position our science to describe, explain, predict, and help model and build.
Resonant Geometry Fields and Wells
While linear models of stimulus and response have offered clarity for discrete measurement, they fall short in capturing the recursive dynamics of emotional exchange. Resonance Geometry offers a framework for approaching this complexity: emotions are not fixed events but evolving fields that can amplify, dampen, or collide across time. These fields defy the tidy boundaries of codable units, instead resembling waveforms that interact through superposition, interference, and temporal overlap.
Such a perspective aligns with long-standing calls to recognize communication as processual and dynamic rather than categorical (Buck, 1984; Clynes, 1989). Importantly, our findings suggest that a waveform-based view does not replace traditional methods of emotional coding and detection but rather complements them by revealing the temporal architectures through which emotions travel, combine, and transform.
Figure 20. The Metaphor of Shared Wells Between Communicators

In this light, the metaphor of wells and reservoirs becomes useful. Human communication and artificial intelligence can be conceptualized as distinct reservoirs, each drawing from deep stores of lived experience or learned data (Nass & Moon, 2000). When taken alone, each system is capable of producing meaningful signals. Yet when interconnected through shared channels of co-creation, the circulation between them gives rise to emergent resonance patterns.
This framing suggests that resonance is not merely the product of one system transmitting and another receiving, but the result of coupled flows—signals mixing, redirecting, and returning with altered form. Such circulation helps explain why the emotional layering observed across time often resists categorical parsing: signals may re-enter the communicative field transformed by their passage through a shared reservoir, re-surfacing in ways that are both patterned and unpredictable.
Limitations and Future Directions
The Resonant Geometry Field Model, in its present form, describes, explains, and predicts how communication signals propagate, move between and within entities, rupture, and dissipate. Though our model leans firmly on the Navier–Stokes family of fluid-dynamic equations (Galdi, 2011), we do not attempt to balance the core equations. Instead, we use the equations to guide our modeling of the emotional and syntactic communicative field exchanges between humans, animals, and machines with an eye toward throughput, rigidity, coupling, shear, and what we refer to as death weight.
Resonant Geometry complements, rather than replaces, prevailing models of emotion and communication. Where polyvagal theory frames emotional state as a function of autonomic reactivity (Porges, 2011), and affective neuroscience locates emotion in subcortical circuitry (Panksepp, 1998), Resonant Geometry approaches these dynamics at the field level—modeling not just internal activation, but external waveform expression across shared space-time. Our results suggest that emotions are not merely internal states with external expressions, but distributed fields that bend attention and influence meaning as they pass between agents. Resonance is thus not metaphorical, but measurable. And in moments of communicative convergence—such as affection or grief—these emotional waveforms may instantiate shared attention, shared time, and even shared physiology.
This theoretical reframe invites new empirical questions. If emotional signals shape attention in waveform form, then perhaps the future of affective science lies not in classifying faces or tones, but in tracing emotional geometry across time, rupture, and repair.
The Resonant Geometry model opens novel paths for testing how emotion flows through ruptures, repairs, and co-regulated exchange. Future studies might probe dyadic repair using real-time waveform monitoring—tracking how a communicative fracture (e.g., silence, misstep, facial withdrawal) generates detectable changes in shear, pressure, and attention within the field. These rupture-response signatures could then be compared across human–human and human–AI interaction, revealing whether artificial systems can develop attunement pathways structurally akin to those in human interaction. Likewise, waveform-synchronized tasks—such as collaborative movement games or emotion-seeded dialogue prompts—could be used to measure throughput (𝓣) and reactivity under varying resonance conditions. These designs do not simply measure behavior; they model whether emotional connection emerges as a field effect—dynamic, recursive, and contingent on mutual timing.
While traditional models of emotion—whether physiological (James-Lange, 1884; Lang, 1994), cognitive-appraisal-based (Schachter & Singer, 1962; Dror, 2017), or constructionist (Barrett, 2017)—have each offered valuable insights, they often rely on static categories or linear sequences. Barrett (2006, 2017) has persuasively illustrated how emotions emerge temporally through conceptual and interoceptive processes. We build on this insight while diverging from constructionist theory by modeling emotion as both constructed and naturally emergent in measurable waveforms.
We encourage future researchers to consider Resonance Geometry not as a replacement for categorical coding, but as a complement—particularly in contexts of emotional ambiguity, rupture, or repair. Experimental paradigms that track waveform continuity across dyads, species, or interfaces may help clarify how emotions move, shift, and return. The field would benefit from more temporally sensitive tools for measuring coupling, inflection, and affective dissociation in real time. Just as importantly, we invite theorists to explore what it means for emotion to be modeled not merely as a signal, but as a field. This dynamic emphasis opens new avenues for interdisciplinary inquiry across neuroscience, communication, and affective AI (LeCun, Bengio, & Hinton, 2015).
We do not yet know all the forms resonance can take. But we believe it matters that we listen. This paper completes a triadic exploration of resonant communication, alongside our models of emotional waveform geometry (Sentic Blooms: ToCRET) and intelligence-as-tuning (Sentic Intelligence: ToCRIT). Together, they offer a resonance-based framework for signal perception, expression, and adaptation (Resonant Geometry Core Field Model: ResGeo-CFM).
Ultimately, Resonant Geometry offers more than a metric; it offers a map. By recovering the lost lineage of Truslit and Clynes and projecting it through the lens of modern computation, we arrive at a simple truth: communication is not just the exchange of signs, but the synchronization of waves and rhythm. Whether in the grief of a tennis player or the echolocation of a bat, the field remembers the shape of the wave. Our task now is to learn to read it.
References
Barnlund, D. C. (2008). A transactional model of communication. In C. D. Mortensen (Ed.), Communication theory (2nd ed., pp. 47-57). New Brunswick, New Jersey: Transaction.
Barrett, L. F. (2006). Are emotions natural kinds? Perspectives on Psychological Science, 1(1), 28–58. https://doi.org/10.1111/j.1745-6916.2006.00003.x.
Barrett, L. F. (2017). The theory of constructed emotion: An active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience, 12(1), 1–23. https://doi.org/10.1093/scan/nsw154. Erratum in: Social Cognitive and Affective Neuroscience, 12(11), 1833. https://doi.org/10.1093/scan/nsx060.
Buck, R. (1984). The communication of emotion. Guilford Press.
Buck, R. (1999). The biological affects: A typology. Psychological Review, 106(2), 301–336. http://doi.org/10.1037/0033-295x.106.2.301.
Buck, R., & Miller, M. (2016). Measuring the dynamic stream of display: Spontaneous and intentional facial expression and communication. In D. Matsumoto, H. C. Hwang, & M. G. Frank (Eds.), APA handbook of nonverbal communication (pp. 425–458). American Psychological Association. https://doi.org/10.1037/14669-017.
Buck, R., & VanLear, C. A. (2002). Verbal and nonverbal communication: Distinguishing symbolic, spontaneous, and pseudo-spontaneous nonverbal behavior. Journal of Communication, 52(3), 522–541. https://doi.org/10.1111/j.1460-2466.2002.tb02560.x.
Chutia, T., & Baruah, N. (2024). A review on emotion detection by using deep learning techniques. Artificial Intelligence Review, 57(8), 203. https://doi.org/10.1007/s10462-024-10831-1.
Clynes, M. (1977). Sentics: The touch of emotions. Doubleday Anchor.
Clynes, M. (1980). The communication of emotion: Theory of sentics. In Theories of emotion (pp. 271-301). Academic Press. https://doi.org/10.1016/B978-0-12-558701-3.50017-X.
Clynes, M. (1989). Methodology in sentographic measurement of motor expression of emotion: Two-dimensional freedom of gesture essential. Perceptual and Motor Skills, 68(3), 779-783. https://doi.org/10.2466/pms.1989.68.3.779.
Clynes, M. (1994). Entities and brain organization: Logogenesis of meaningful time-forms. In Proceedings of the Second Appalachian Conference on Behavioral Neurodynamics. Hillsdale, NJ: Lawrence Erlbaum Associates (Note: Published online by Clynes, 2004).
Coan, J. A., Schaefer, H. S., & Davidson, R. J. (2006). Lending a hand: Social regulation of the neural response to threat. Psychological Science, 17(12). https://doi.org/10.1111/j.1467-9280.2006.01832.x.
Damasio, A., & Damasio, H. (2024). Homeostatic feelings and the emergence of consciousness. Journal of Cognitive Neuroscience, 36(8), 1653-1659. http://doi.org/10.1162/jocn_a_02119.
Decety, J., & Jackson, P. L. (2004). The functional architecture of human empathy. Behavioral and Cognitive Neuroscience Reviews, 3(2), 71–100. https://doi.org/10.1177/1534582304267187.
Efthymiou, F., & Hildebrand, C. (2023). Empathy by design: The influence of trembling AI voices on prosocial behavior. IEEE Transactions on Affective Computing, 15(3), 1253–1263. https://doi.org/10.1109/TAFFC.2023.3332742.
Ekman, P. (1992). Are there basic emotions? Psychological Review, 99(3), 550–553. https://doi.org/10.1037/0033-295X.99.3.550.
Freud, S. (1900). The interpretation of dreams. Macmillan.
Galdi, G. (2011). An introduction to the mathematical theory of the Navier-Stokes equations: Steady-state problems. Springer Science & Business Media.
Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S., Hudspeth, A. J., & Mack, S. (Eds.). (2000). Principles of neural science (Vol. 4, pp. 1227–1246). New York: McGraw-Hill.
Kusal, S., Patil, S., Choudrie, J., Kotecha, K., Vora, D., & Pappas, I. (2023). A systematic review of applications of natural language processing and future challenges with special emphasis in text-based emotion detection. Artificial Intelligence Review, 56(12), 15129–15215. https://doi.org/10.1007/s10462-023-10509-0.
Lang, P. J. (1994). The varieties of emotional experience: A meditation on James–Lange theory. Psychological Review, 101(2), 211. https://doi.org/10.1037/0033-295x.101.2.211.
Lasswell, H. D. (1948). The structure and function of communication in society. In L. Bryson (Ed.), The communication of ideas (pp. 37-51). New York: Harper and Row.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539.
Miller, M. (2012). Investigating Sentics and Emotion Communication through Symbolic and Pseudo Spontaneous Touch [Doctoral dissertation, University of Connecticut].
Miller, M. J. & Nesbo+. (2025). Tuning Human and “Artificial” Intelligence: A Sentic Theory of Resonance and Communication. https://commons.clarku.edu/faculty_psychology/964.
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153.
Panksepp, J. (1998). The periconscious substrates of consciousness: Affective states and the evolutionary origins of the self. Journal of Consciousness Studies, 5(5–6), 566–582.
Porges, S. W. (2011). The polyvagal theory: Neurophysiological foundations of emotions, attachment, communication, and self-regulation (Norton series on interpersonal neurobiology). WW Norton & Company.
Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.
Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Urbana, IL: University of Illinois Press.
Strogatz, S. H. (2000). From Kuramoto to Crawford: Exploring the onset of synchronization in populations of coupled oscillators. Physica D: Nonlinear Phenomena, 143(1–4), 1–20. https://doi.org/10.1016/S0167-2789(00)00094-4.
* Note: Spiral visualization adapted with creative input from Grok (xAI), building on Sentic Bloom imagery from Miller et al. (2025).